I regularly have to provide guidance on sizing new hardware for a project, utilising VMware vSphere as the virtualisation platform. The question of sizing VMware hosts does not have a simple answer, as it depends upon the needs of the applications being put on the hardware. For example, a database will need high transactional I/O and a large memory footprint, while fileservers or stateless web/application servers will have lower demands.

VMware provides – through partners and the VMware Capacity Planner – analysis services that give a more definitive answer to this question, or you can use tools like vCenter Operations Manager.

The 3 rules for ESXi host sizing

However, there are three general rules for selecting the sizes of components within a vSphere cluster:

  • Maximise the COUNT of CPU cores
  • Get the MOST RAM capacity
  • If you can, SPEND money on the FASTEST possible storage

These rules cover almost every vSphere cluster design, addressing the needs of over 90% of cases: the most CPU cores, the highest memory capacity and the fastest storage.

Three rules for CPU cores, RAM and storage

Allow me to explain a little more about my three rules for host sizing:

Maximise core count, not GHz. In almost every modern ESXi environment I have seen, the CPU load is low – in many cases consumption is below 15% of CPU on a single host. Modern CPUs are so fast that a request for resources by Windows can be over and done with in milliseconds, allowing the hypervisor to schedule the next CPU request onto another core. However, if you have fewer cores, other VMs will have to wait for a core to become available to do even the most trivial of tasks. Hyperthreading can help, but count cores as a priority – a quad-core CPU with hyperthreading is not better than a six-core CPU without it.
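To make the arithmetic behind that rule concrete, here is a minimal sketch comparing the two hosts at a fixed vCPU-to-core consolidation ratio. The 4:1 ratio, the 2 vCPU VM size and the decision to ignore hyperthreads are illustrative assumptions, not vSphere rules or limits.

```python
# Hypothetical comparison of vCPU-to-core ratios for two host options.
# The target ratio and VM sizes are illustrative assumptions.

def max_vms(physical_cores: int, vcpus_per_vm: int, target_ratio: float = 4.0) -> int:
    """Estimate how many VMs a host can comfortably schedule.

    target_ratio is the vCPU:pCPU consolidation ratio you are willing
    to run at (4:1 is a common conservative starting point).
    Hyperthreads are deliberately ignored: they help the scheduler,
    but they are not extra cores.
    """
    schedulable_vcpus = physical_cores * target_ratio
    return int(schedulable_vcpus // vcpus_per_vm)

# Quad-core with hyperthreading vs six-core without it, both hosting 2 vCPU VMs
print("4 cores + HT :", max_vms(physical_cores=4, vcpus_per_vm=2), "VMs")
print("6 cores      :", max_vms(physical_cores=6, vcpus_per_vm=2), "VMs")
```

At the same consolidation target, the six-core host schedules 50% more VMs than the quad-core, regardless of hyperthreading.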

Put in the most RAM you can get. You may consider this a contradiction of other posts where I mention that you should size a cluster around the density of VMs, so that in the event of a host evacuation (or a host failure) you are not limited by the maximum of 4 or 8 concurrent vMotions. However, if you follow my rule about maximising RAM in each host when specifying your hosts, then you will be able to increase VM density (and hence decrease the cost per VM). Putting lots of RAM into a host also means that there is more RAM available to give to VMs – even though vSphere uses technologies such as Transparent Page Sharing (TPS), memory ballooning, memory compression and even swapping, you will find better performance if each VM is able to get 100% of its committed RAM. So, when sizing your hosts, big memory is more important than memory speed – you will need to find the right balance between RAM capacity, speed and cost.
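As a rough sketch of how RAM capacity drives density, the following estimates RAM-bound VM density per host when every VM gets 100% of its committed RAM and one host's worth of failover headroom is held back. The host sizes, the 8 GB VM, the 8 GB hypervisor overhead and the four-host cluster are all illustrative assumptions.

```python
# Hypothetical sizing check: how many VMs fit per host if every VM gets
# 100% of its committed RAM, while keeping enough headroom to absorb
# one failed host (N+1). All figures are illustrative assumptions.

def vms_per_host(host_ram_gb: int, vm_ram_gb: int, hosts: int,
                 hypervisor_overhead_gb: int = 8) -> int:
    """RAM-bound VM density with N+1 failover headroom and no overcommit."""
    usable_per_host = host_ram_gb - hypervisor_overhead_gb
    # Spread a failed host's load across the survivors: only use
    # (hosts - 1) / hosts of each host's RAM for steady-state placement.
    steady_state = usable_per_host * (hosts - 1) / hosts
    return int(steady_state // vm_ram_gb)

for ram in (256, 512, 768):
    print(f"{ram} GB hosts:", vms_per_host(ram, vm_ram_gb=8, hosts=4), "x 8 GB VMs per host")
```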

Buy the fastest storage. You may think I have worded this strangely – that you should spend money to get fast storage. It’s very true that you get what you pay for, because raw IOPS is affected by more than just the count and speed of the backing disks/devices. Without going into the details of examining the technologies used in shared storage to ensure that there are no bottlenecks, single points of failure or sub-optimal designs such as active/passive controllers – you need to identify the software capabilities and features. Nothing affects VM performance more than shared storage performance, so get the best you can afford.
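As a back-of-envelope illustration for spindle-backed arrays, the sketch below applies the common RAID write-penalty formula to estimate how many disks a given front-end IOPS load needs. The per-disk IOPS figure, the 70/30 read/write split and the penalty table are illustrative assumptions – controller cache, tiering and flash change the answer considerably.

```python
# Rough estimate of disk count for a spindle-backed array once the
# RAID write penalty is included. All figures are illustrative assumptions.

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def disks_required(frontend_iops: int, read_pct: float, raid: str,
                   iops_per_disk: int = 180) -> int:
    """Estimate the number of disks needed to service a front-end IOPS load."""
    write_pct = 1.0 - read_pct
    backend_iops = (frontend_iops * read_pct) + \
                   (frontend_iops * write_pct * RAID_WRITE_PENALTY[raid])
    return -(-int(backend_iops) // iops_per_disk)  # ceiling division

# 5,000 front-end IOPS, 70% read, on 10k SAS (~180 IOPS per disk)
for raid in ("RAID10", "RAID5", "RAID6"):
    print(raid, ":", disks_required(5000, read_pct=0.7, raid=raid), "disks")
```

The point of the exercise: the same workload can need nearly twice the spindles on RAID6 as on RAID10, which is exactly where "you get what you pay for" bites.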

The finer details – and the exceptions

Of course, you would need to tweak the finer details of your hardware for your design, depending upon your application or project needs. If you need lots of storage capacity, then you can use a tiering solution for your storage. If your application needs high storage performance, then you can also invest in technologies like 10Gbps iSCSI or 16Gbps Fibre Channel infrastructure and switches. If you are building a secure or DMZ environment, then you will need to increase the number of NICs. If your application is performing encryption, video/audio encoding etc., then invest in CPUs that support advanced features. If you need high availability in an unstable environment, invest in multiple hosts that are separated onto different power feeds, environments etc. The list of parameters that could change your ESXi host sizing design is beyond the scope of this blog post, but the three general rules above will serve you well.

What doesn’t matter (as much) is the vendor of each component. You can mix and match hardware from different suppliers, even components that have different capabilities, but they should be comparable – for example you can have Broadcom 10Gbps NICs in one host and Intel 10Gbps NICs in another. Even components like GPUs can be different, and vMotion of View desktops is still permitted. Whilst you could have QLogic or Emulex Fibre Channel HBAs, they should ideally be the same speed – but they don’t have to be. I’m prepared for arguments about this point!

I would love to hear opinions in the comments below.
