Hardware Virtualization Vs OS Virtualization Vs Application Virtualization
In this article I'd like to touch briefly on the different levels of virtualization technologies that I see being discussed lately. I am not going to talk about specific products; I'd rather keep this at a higher level, referencing product implementations just as examples. Lately I have been working on a "Virtual Appliance" presentation that I did for an IBM internal symposium, and while I was trying to picture the advantages of a "Virtual Appliance" a doubt arose in my mind: isn't this the same concept we use to describe the benefits of "Application Virtualization"? And the (short) answer is "yes, it is indeed". But let's dig into the (long) answer.
Let's start by describing the different levels of virtualization available in the market today. They are:
- Hardware Virtualization (a standard OS gets installed on a fictitious piece of hardware)
- Operating System virtualization (a standard application gets installed on a fictitious OS)
- Application Virtualization (an application is packaged/shielded to run on a standard OS)
The concept of Hardware Virtualization is straightforward: you cheat the OS into believing it is running on its own dedicated hardware, so you can present more hardware resources than you actually have. Out of a single 4-CPU physical system you could carve out 12 1-CPU virtual servers and install 12 independent standard OSes as if you were installing them on 12 independent pieces of hardware. Examples of products and technologies that implement this concept are VMware ESX, MS Virtual Server, Xen and others.
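Just to make the idea tangible, here is a minimal, purely illustrative sketch of carving a 1-CPU guest out of a larger host using the libvirt Python bindings (which can drive Xen, KVM and others). The connection URI, guest name, memory size and disk path are hypothetical, and the XML is trimmed to the bare minimum:

```python
# Illustrative sketch only: defining and booting a 1-vCPU virtual server on a
# multi-CPU physical host through the libvirt Python bindings. The guest gets a
# "fictitious piece of hardware" and a standard OS is installed inside it as if
# it were a separate physical box. Names, sizes and paths are hypothetical.
import libvirt

GUEST_XML = """
<domain type='xen'>
  <name>guest-01</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/images/guest-01.img'/>
      <target dev='xvda'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open('xen:///')        # connect to the local hypervisor
dom = conn.defineXML(GUEST_XML)       # register the virtual "hardware"
dom.create()                          # power it on: a full standard OS runs inside
print("Host has %d physical CPUs; guest '%s' sees 1 vCPU"
      % (conn.getInfo()[2], dom.name()))
conn.close()
```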
The concept of Operating System Virtualization might be a bit harder to grasp, but it's not rocket science: this time, instead of the hardware cheating the OS as in the hardware virtualization model, we move a level up and it's the OS that cheats the application. You basically have one piece of hardware and one single "base" OS image that you can "multi-instantiate", if you will, into independent containers. This means that, ideally, when you install your application, the application behaves as if it had a whole OS dedicated to itself, while in reality it is sharing the single underlying OS instance with all the other containers.
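As a very rough sketch of the core trick (one shared kernel, many isolated application views), consider the following snippet. Real OS-virtualization products do far more (resource limits, separate process and network spaces, management tooling); the container root directories here are hypothetical and the script would need to run as root:

```python
# Rough sketch of OS virtualization's core idea: a single shared kernel, with
# each application jailed into its own private view of the filesystem.
# Real products add resource controls and much stronger isolation; this only
# illustrates the "one OS image, many containers" mapping.
# The per-container root directories are hypothetical and must be pre-populated
# with their own /bin, /lib, etc. Run as root.
import os

CONTAINERS = {
    "app-a": "/srv/containers/app-a",
    "app-b": "/srv/containers/app-b",
}

def launch(root, command):
    """Fork a child, confine it to its own root, and run the application there."""
    pid = os.fork()
    if pid == 0:                          # child: becomes the "container"
        os.chroot(root)                   # private filesystem view
        os.chdir("/")
        os.execvp(command[0], command)    # the application takes over the child
    return pid                            # parent: tracks the container's pid

pids = [launch(root, ["/bin/sh", "-c", "echo hello from " + name])
        for name, root in CONTAINERS.items()]

for pid in pids:
    os.waitpid(pid, 0)                    # wait for both containers to finish
```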
The concept of Application Virtualization is easy. In a nutshell, virtualizing an application means re-packaging it somehow and redistributing it in a different shape/format. This new "format" is usually a single big file that gets "copied" on top of an OS and doesn't need to be "installed". This allows running different and potentially incompatible applications on the same Operating System without the applications stepping on each other due to DLL conflicts or registry incompatibilities. This is because these applications are basically shielded and are distributed as monolithic files that contain everything (DLLs, custom registry entries, etc.). Examples of products and technologies that implement this concept are Microsoft SoftGrid, Thinstall and others.
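This is only a loose analogy (products like SoftGrid and Thinstall intercept DLL and registry access at run time, which a plain archive cannot do), but the "single big file that is copied rather than installed" idea is easy to demonstrate with Python's standard zipapp module; the directory layout and file names below are hypothetical:

```python
# Loose analogy for application virtualization: bundle an application and its
# private libraries into one monolithic file that runs without an install step.
# The "myapp" directory layout below is hypothetical:
#   myapp/__main__.py      entry point of the application
#   myapp/somelib/...      private copy of a library that could otherwise
#                          conflict with another version installed system-wide
import zipapp

zipapp.create_archive(
    source="myapp",                        # hypothetical application directory
    target="myapp.pyz",                    # the single file to redistribute
    interpreter="/usr/bin/env python3",    # makes the archive directly runnable
)
# The resulting myapp.pyz can simply be copied onto any machine with Python and
# executed ("./myapp.pyz"): no installation step, no shared files to step on.
```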
Have you noticed anything?
1. Hardware Virtualization: 1 hardware -> n OS images -> n Applications
2. Operating System Virtualization: 1 hardware -> 1 OS image -> n Applications
3. Application Virtualization: 1 hardware -> 1 OS image -> n Applications
Apparently OS Virtualization and Application Virtualization are not that different after all. Granted, they use completely different technologies to implement what they do, but the end result appears to be similar: they both make a single OS instance run different, dissimilar and perhaps incompatible applications without any conflict. On the other hand, Hardware Virtualization seems to be different from the other two models. In fact, whereas both OS Virtualization and Application Virtualization leverage a single OS instance to support multiple workloads, Hardware Virtualization requires you to load multiple OS instances (typically with a 1-to-1 mapping to applications) in order to do the same thing. Apparently.
In fact, at the beginning I mentioned that I have been working on the concept of "Virtual Appliances" for a while now. This concept is intriguing and I will spend a few lines on it. Basically the point behind it is the acknowledgment that a multi-purpose OS (Windows / Linux) has become a very complex stack of software that is supposed to provide, potentially, thousands of functionalities ranging from hardware support to application API support, from HA clustering to security, from backup services to network services, and so on and so forth. Clearly not all of this potential is required or exploited in any given deployment, so most of the time, due to this "1 Application to 1 OS" mapping, the standard multi-purpose OS has become a sort of 2GB DLL attached to the application where, most likely, only a small portion of this 2GB of code is actually used at run time. Not to mention that most of these infrastructure services (hardware support, backup, HA, etc.) are quickly "draining" into the virtual infrastructure framework, leaving these duplicated functions in the Guest OS not even utilized at all. So, in short, the idea behind this "Virtual Appliance" concept is to re-work the entire datacenter stack so that these infrastructure services are provided by the hardware virtualization layer (and its management tools), and to let the application run above it bundled with a very thin, tailored OS layer.
This sounds interesting. If you have followed the flow, it won't take much to understand that this industry is moving faster and faster to re-write the concept of the OS. In a scenario like the one I tried to briefly depict, the concept of the OS is better tied to what the virtual infrastructure does and no longer to what is included in the virtual machine minidisk. So the hardware virtualization layer is to provide all the OS-like infrastructure services we described above, while the virtual machine only has to provide the business logic (to the point where the ISV providing the application will bundle a very tailored and customized minikernel that will allow the application to boot and operate smoothly within a virtual environment). So instead of having a 2GB guest OS + an application you would end up having a few KB/MB thin-OS + an application. This means that, if we consider the hardware virtualization layer "the Operating System" and the virtual appliance the application... our table would now look different:
1. Hardware Virtualization: 1 hardware -> 1 OS image -> n Applications (i.e. Virtual Appliances)
2. Operating System Virtualization: 1 hardware -> 1 OS image -> n Applications
3. Application Virtualization: 1 hardware -> 1 OS image -> n Applications
(If you want to know more about Virtual Appliances, VMware has some good info here: http://www.vmware.com/appliances).
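To make the "thin-OS + application" idea a bit more concrete, here is a purely hypothetical sketch using libvirt's direct kernel boot: the guest carries only an ISV-supplied minikernel and a small image with the application, while networking, HA, backup and the rest are left to the virtual infrastructure underneath. All names, paths and sizes are invented for illustration:

```python
# Hypothetical sketch of what booting a "virtual appliance" could look like via
# libvirt's direct kernel boot: the guest ships only a tailored minikernel plus
# a small image with the application, instead of a full general-purpose OS.
# All names, paths and sizes are invented for illustration.
import libvirt

APPLIANCE_XML = """
<domain type='kvm'>
  <name>crm-appliance</name>
  <memory unit='MiB'>128</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <kernel>/appliances/crm/minikernel</kernel>
    <initrd>/appliances/crm/app.img</initrd>
    <cmdline>quiet app=crm</cmdline>
  </os>
  <devices>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')
appliance = conn.createXML(APPLIANCE_XML, 0)   # boot the appliance as a transient VM
print("appliance '%s' is running" % appliance.name())
conn.close()
```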
Let's look at a real life architectural example:
The picture above is meant to compare a product implementation of application virtualization (i.e. Thinstall) to the concept of a virtual appliance running on a virtual infrastructure. Specifically you can easily see the convergence of the two:
| Thinstall | Virtual Appliances |
|---|---|
| Application | Application |
| Thinstall Virtual OS | "Tailored" OS |
| (Windows) Operating System | Hypervisor |
Isn't this the same thing with different naming?
So you might wonder at this point why we need three different models (Hardware Virtualization, OS Virtualization and Application Virtualization) if they do the same thing... Well, in reality they do the same thing but with different characteristics. To close this thread, I'll try to position these three models.
Hardware Virtualization (and the concept of Virtual Appliances) is certainly going to matter more in datacenter environments where you have heterogeneous back-end services to run and where the infrastructure (i.e. OS) requirements are security, resiliency and robustness. These OS characteristics are certainly met by the Hypervisor/Virtual Infrastructure concept, which would be the ideal platform to run back-end workloads.
OS Virtualization is certainly going to matter more in datacenter environments where you have homogeneous back-end services. If you want to maintain a certain level of independence between the various service containers, yet leverage a common OS code base for easy management, this solution might be the right choice. A typical example of where this model fits is web server farms, where you can exploit the advantages of a single OS image supporting multiple, yet independent, homogeneous environments.
Application Virtualization is certainly going to be very relevant in personal productivity (i.e. PC) environments where you have heterogeneous GUI applications to run and where the local end-user OS requirements are ease of use and flexibility. These OS characteristics are certainly met by the standard Windows XP / Vista experience, where you can easily run multiple heterogeneous and potentially incompatible interactive applications.
So, in conclusion, I don't see these three different technologies being stacked together in the future to solve a given problem; rather, I see each one being used to solve a specific problem in a very specific scenario. My last take is that, depending on the success of the Virtual Appliance concept, the OS Virtualization model might be squeezed into a niche, given the flexibility that hardware virtualization might provide even for homogeneous deployments, which are the primary target for OS virtualization. This would leave hardware and application virtualization as the two predominant models to simplify, respectively, the server and the client IT stacks.
Massimo.