I recently read an article (published August 2013) listing all the reasons not to virtualise certain systems. It got me thinking about those people who still believe that virtualisation is new, unstable and overly complicated.

Why Virtualize?

There are hundreds of articles about why virtualisation is good for your datacentre: it saves energy, consolidates servers and floor space, makes building new servers faster and improves security. However, I don’t see much said about the reasons to virtualise those troublesome or previously skipped servers.

There was a time, back in the early days of virtualisation, when software vendors would refuse to support their products if they were virtualised, when there were requirements for physical devices that could not be handled by a hypervisor, and when there were concerns over performance and security. I still see – in almost every datacentre I visit – a desktop PC or two, and a dusty old standalone physical server, left physical because of the bad old days when it was safer to leave it that way.

Times have changed, you can virtualise almost anything.

The barriers are gone, but there may still be the question of why bother to virtualise. It’s perceived as too much effort, a risk, or not really required (after all, that old PC is never touched by anyone and it hasn’t been rebooted in over a year…). Well, if you have this train of thought (and you don’t patch those desktop PCs in your datacentre), then you probably need to understand the benefits.

A physical server’s structure

Let’s assume for a second that you have a server-class system. Something like a Dell, HP, Compaq (if it’s that old!) or IBM server – properly set up at one time, and it’s running an application that has not been virtualised.


[Figure: the software stack on a physical server]

For this example, let’s assume that this server has an application or two, and a service, running on Windows. That’s the bit you want.

Below Windows sit the manufacturer-provided drivers – for the network and disk, probably some for the video card and other components (even if the application doesn’t need them). These drivers need to be updated, the firmware for the hardware also needs to be updated, and the firmware and driver versions have to be kept at levels that support each other.

In between Windows and the drivers, there will be management interfaces, tools to modify configuration such as network card teaming/bonding, something to set up RAID, and on top of that, management and monitoring utilities covering everything from fan speeds and temperatures to power supply and integrated component health. For a physical server, you need all of that. When you clean up after a P2V, you will see just how much of this stuff is hidden in there. If you can remember back to the steps to build a physical server, installing the hardware drivers and manufacturer management tools took almost as long as installing Windows – and often needed multiple reboots.
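If you are curious how much of that vendor tooling is still lurking on a machine you have just converted, one quick way to find out is to scan the Windows uninstall registry keys for anything that looks like a hardware agent. The short Python sketch below (run inside the guest) does just that; the vendor keyword list is purely illustrative, so adjust it to match your own hardware.

```python
# Minimal sketch: list installed packages on a freshly converted (P2V) Windows
# guest whose names suggest leftover hardware-vendor management agents.
# The VENDOR_KEYWORDS list is illustrative, not exhaustive.
import winreg

VENDOR_KEYWORDS = ("Hewlett-Packard", "HP ProLiant", "Dell OpenManage",
                   "IBM Director", "Broadcom", "Intel(R) PROSet", "Matrox")

UNINSTALL_PATHS = (
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
)

def installed_display_names():
    """Yield the DisplayName of every entry under the Uninstall registry keys."""
    for path in UNINSTALL_PATHS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(root)[0]):
            try:
                sub = winreg.OpenKey(root, winreg.EnumKey(root, i))
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
                yield name
            except OSError:
                continue  # entry has no DisplayName, or vanished mid-scan

if __name__ == "__main__":
    for name in installed_display_names():
        if any(keyword.lower() in name.lower() for keyword in VENDOR_KEYWORDS):
            print("Possible leftover hardware agent:", name)
```

Anything it flags is a candidate for uninstalling – none of it does anything useful once the hardware it was written for is gone.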

The thin VMware Tools layer

So, here’s a big part of why bother to virtualise – and the point of this article. When you have a virtual machine, there is no need for any of those management components – in fact, many people forget to uninstall them when they convert from physical to virtual (P2V). It’s just clutter – you don’t need them at all (even if your ESXi hosts are IBM, HP, Dell etc., the management is done at the hypervisor layer, not in each virtual machine).


[Figure: the software stack in a virtual machine]

All that you need is VMware Tools – the driver package that is mounted as a virtual CD in the VM from the VM menu. It contains all the drivers you need, plus the link that allows vCenter to obtain management, monitoring and administrative information from the virtual machine.

All the hardware monitoring and management is handled by the CIM providers on the hypervisor and reported up to vCenter – none of it is needed inside each VM.
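You can see both halves of that picture from the vCenter side. The pyVmomi sketch below (the vCenter hostname and credentials are placeholders, and it assumes `pip install pyvmomi`) lists the VMware Tools status of every VM and then the hardware health sensors reported by each host: the former is the only agent the guest carries, the latter never touches the guest at all.

```python
# Minimal pyVmomi sketch, assuming a reachable vCenter at vcenter.example.local
# with the placeholder credentials below. It shows that guests only need
# VMware Tools, while hardware health comes from the host's CIM providers.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def objects_of(vim_type):
    """Return all managed objects of the given type from the inventory."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim_type], True)
    objs = list(view.view)
    view.Destroy()
    return objs

# The only per-VM "agent" that matters: is VMware Tools installed and current?
for vm in objects_of(vim.VirtualMachine):
    print(f"{vm.name}: tools status = {vm.summary.guest.toolsStatus}")

# Hardware health never touches the guest; it is surfaced by the host itself.
for host in objects_of(vim.HostSystem):
    health = host.runtime.healthSystemRuntime
    sensors = (health.systemHealthInfo.numericSensorInfo
               if health and health.systemHealthInfo else [])
    for sensor in sensors:
        print(f"{host.name}: {sensor.name} -> {sensor.healthState.key}")

Disconnect(si)
```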

So, no CPU cycles are wasted by a VM trying to manage hardware, there are fewer software components to crash or fail, and there is less complexity in maintaining version compatibility.

So, at the VM level at least, things are simpler, leaner and all-round easier.

Where’s the complexity?

Yes, there is more complexity in getting a virtual environment up and running – SANs and HBAs (it’s 2015 now, so this could be slightly simpler with a NAS, Virtual SAN or hyperconverged system), more of a learning curve in picking up the skills virtualisation requires (security, resource pooling, optimisation etc.), and potential hurdles such as licensing and support (particularly for old applications and systems).

But there are benefits to be gained from virtualising even those older systems that have been skipped or left alone for years.
