SDDC drives mix-and-match hardware
If you build a cluster for a VMware virtualised infrastructure, you can use almost any compute resource you have available. Do you want to mix a dual-CPU Dell R710 with 32 GB of RAM and a single-CPU IBM BladeCenter HX5 with 96 GB? Sure, go ahead. All the memory and CPU resources are added to a pool, and the configuration of each VM is a property of the VM itself: the RAM allocated (including reservations, shares and limits), the virtual CPUs assigned, the virtual disks and networks. It is no longer critical to stick to the same hardware manufacturer; instead you can simply purchase the model based on its capabilities and cost – “bang for the buck”.
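To make that concrete, here is a minimal pyVmomi (Python) sketch of the idea – the reservation, limit and shares live in the VM’s own ConfigSpec, not on any particular host. The vCenter address, credentials and VM name are placeholders, and error handling is omitted.

```python
# Minimal sketch: a VM's resource allocation is part of the VM's own
# configuration, so it follows the VM whichever host DRS places it on.
# Connection details and the VM name 'app01' are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; verify certs in production
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password',
                  sslContext=ctx)

# Locate the VM by name (a container view keeps the example short)
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'app01')
view.Destroy()

# Reservations, shares and limits are VM properties, not host properties
mem_alloc = vim.ResourceAllocationInfo(
    reservation=4096,                             # MB guaranteed
    limit=8192,                                   # MB ceiling (-1 = unlimited)
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.high))
spec = vim.vm.ConfigSpec(numCPUs=2, memoryAllocation=mem_alloc)

vm.ReconfigVM_Task(spec=spec)                     # returns a Task to monitor
Disconnect(si)
```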
Ten years ago, companies would select one hardware vendor and stay with them. The vendor was selected based on the quality and consistency of drivers, the hardware feature set, management tools, implementation and deployment tools, drive caddy form factor, the skills of administrative staff and, finally, cost. The differences between manufacturers were significant enough to justify staying with the selected vendor.
For a server running ESXi, most of these selection criteria have gone away. There is no longer a requirement for local disks or complex deployment tools (boot from SAN, boot from SD or USB, VMware Auto Deploy), hardware management and monitoring tools are [mostly] integrated into vCenter or vC Ops, and it largely does not matter what NIC / HBA / RAID controller you have (as long as it is on the VMware HCL!). The choice of host server vendor can now be changed more freely, with new hosts added to the pool of resources and consumed according to software-defined policies such as DRS.
It is common practice for a large physical server that was once running a Tier 1 workload (such as a large SQL Server) to be virtualised, with its physical resources then added to the pool of the existing VMware cluster. When that physical server joins the cluster, EVC is sometimes needed to hide advanced CPU features from VMs so that vMotion can move them onto older CPUs – and this mostly does not affect performance.
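As a rough illustration, the read-only sketch below uses pyVmomi to compare each host’s highest supported EVC baseline with the cluster’s current EVC mode – a quick way to see how far a mixed cluster would have to lower its baseline. The cluster name and connection details are placeholders.

```python
# Read-only sketch: report the cluster's current EVC baseline and each
# host's highest supported EVC mode before mixing CPU generations.
# 'Prod-Cluster' and the connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password',
                  sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Prod-Cluster')
view.Destroy()

# None means EVC is not currently enabled on the cluster
print('Cluster EVC mode:', cluster.summary.currentEVCModeKey)

for host in cluster.host:
    # maxEVCModeKey is the newest CPU baseline this host can present;
    # an older host may force the whole cluster baseline downwards.
    print(host.name, '->', host.summary.maxEVCModeKey)

Disconnect(si)
```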
But this has not been the case with network switches and routers. Until now. There have been valid reasons to purchase network hardware from one vendor – interoperability, consistency of configuration and a shared command set. A network switch is selected not just for its port density and cost, but also for the features and capabilities offered – in the software of the switch. And that is an important point – the switches run software, and this software needs to be updated on each device to enable new features and options. Yes, there are ways to interconnect switches from different vendors and achieve a shared level of capability, but this becomes complex and easily broken. Add in the complexity of security and firewalls, and the choice of network hardware vendor often drives (or limits) the capabilities that get implemented. This can be due to switch licensing costs, the need to update multiple devices to enable features, and the effort of maintaining security at all levels.
When networking parameters and security are defined at the VM level, it becomes easier to maintain settings and consistency when the underlying hardware changes. When network features and capabilities are defined at the VM level, it becomes less relevant what the features of the physical switches are. With the Software Defined Data Centre, network hardware changes from a capability-enabling device into a pool of resources – simply a transport for network packets.
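To illustrate the idea purely conceptually (this is not any particular vendor’s API), here is a small Python sketch of what “networking and security as a property of the VM” might look like: the policy is attached to the VM object and travels with it, while the physical switch only has to forward packets.

```python
# Conceptual sketch only - not a real vendor API. The network and
# security policy is modelled as data attached to the VM itself.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FirewallRule:
    direction: str      # "in" or "out"
    protocol: str       # "tcp", "udp", ...
    port: int
    action: str         # "allow" or "deny"

@dataclass
class VMNetworkPolicy:
    logical_network: str                    # e.g. an overlay segment name
    ip_address: str
    firewall_rules: List[FirewallRule] = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    policy: VMNetworkPolicy                 # attached to the VM, not the switch

web01 = VirtualMachine(
    name='web01',
    policy=VMNetworkPolicy(
        logical_network='web-tier',
        ip_address='10.0.1.10',
        firewall_rules=[FirewallRule('in', 'tcp', 443, 'allow'),
                        FirewallRule('in', 'tcp', 22, 'deny')]))

# Moving web01 to a host behind a different physical switch changes none
# of the above - the policy is defined and enforced at the VM boundary.
print(web01.policy)
```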
Only time will tell whether companies will start to branch out into mix-and-match networking hardware, changing from purchasing from one vendor based on features and capabilities to purchasing based on port density and latency for the price – “bang for the buck”.
I originally posted this content on http://blogs.vmware.com/tam/2013/09/software-defined-networking-drives-commodity-network-switches.html