You may know about ToR, top-of-rack switches: the practice of placing a physical switch within each rack, so that the network switching sits close to the servers and devices that connect to it. A ToR switch allows the servers in a rack to communicate with each other directly, reducing overall traffic on the core switching. ToR switches are typically 1RU with around 48 ports – enough to provide one uplink for each server if all 42RU of a rack were filled with 1RU servers.

Top-of-rack switches came about as a result of the increased density of servers and networking within a 42RU rack. In the '90s and '00s there might have been 5–15 servers in a rack, each with one or two network connections; in the second decade of the 2000s, a single rack is more likely to hold over 80 network links. This is not only because servers are smaller, but also because each one carries multiple links – bonded links, out-of-band management, and virtual hosts participating in multiple networks. This increased density would eventuate in such a mass of copper cabling in each rack that it makes sense to consolidate it into two or three ToR switches per rack. The uplinks from the ToR switches to the core switches can then be 10-gigabit links using AOC, Twinax or bonded/trunked connections, achieving backbone speeds of over 150Gbps – far higher than possible (or needed) from a single server.
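The backbone figure above is just multiplication of links, speeds and switches. A minimal sketch (the link counts and switch counts below are illustrative assumptions, not from any specific deployment):

```python
# Sketch: aggregate backbone bandwidth a rack presents to the core,
# from bonded/trunked uplinks. All numbers here are illustrative.

def aggregate_uplink_gbps(links_per_switch: int, link_speed_gbps: int,
                          switches_per_rack: int) -> int:
    """Total uplink bandwidth from one rack to the core switching."""
    return links_per_switch * link_speed_gbps * switches_per_rack

# e.g. three ToR switches, each bonding six 10GbE uplinks
print(aggregate_uplink_gbps(6, 10, 3))  # 180 (Gbps) - past the 150Gbps mark
```

Even modest bonding per switch clears 150Gbps once two or three ToR switches share the load.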

What’s wrong with ToR?

One of the aims of installing ToR switches in each rack is to reduce cabling – cables that would otherwise reduce airflow, be bulky and difficult to manage, be at risk of damage (such as crushing, or excessive bending caused by neighbouring cables) and be generally ugly. However, people tend to take the "top" in top-of-rack literally: the switches get installed at the very top of the rack.


The better option is to install the switches, and any related structured patch cabling, in the middle of the rack – around the 23-25U mark.

What this does is make the cabling consistent. With switches at the top, a server at the bottom of the rack needs a cable 3–4 metres long (including the cable management arm), while a server at the top needs only 0.5–1 metres. With the switches and patching in the middle of the rack, most cabling is the same length.
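The difference is easy to quantify. A rough sketch, assuming 1RU ≈ 44.45 mm and an assumed half-metre of slack per cable for routing and management:

```python
# Sketch: patch-cable length needed from each server to the switch,
# for a switch at the top of a 42RU rack versus the middle (~24U).
# RU height is standard; the slack allowance is an assumption.

RU_MM = 44.45      # height of one rack unit in millimetres
SLACK_MM = 500     # assumed routing/management slack per cable

def cable_length_m(server_u: int, switch_u: int) -> float:
    """Approximate patch length between a server position and the switch."""
    vertical_mm = abs(server_u - switch_u) * RU_MM
    return round((vertical_mm + SLACK_MM) / 1000, 2)

top = [cable_length_m(u, 42) for u in range(1, 42)]  # switch at top
mid = [cable_length_m(u, 24) for u in range(1, 42)]  # switch mid-rack
print(max(top), max(mid))  # the longest run roughly halves mid-rack
```

With the switch mid-rack, every run fits comfortably within a single stocked length, which is exactly the "all one length" outcome described below.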

[Photo: a row of racks with switches and patching mid-rack]

This photo shows a row of racks, where there are three switches, 1 cassette of fibre, and 1 patch panel – all in the middle of the rack. The space between each is filled with cable management. On the opposite side (front) of the rack, the space is used for a KVM screen or a shelf for non-rackable units.

This close-up shows the inside of a single rack, where the datacentre's decision was to colour-code the cables by switch: green for OOB (out-of-band management – HP iLO, in this example), white for production data, and red for the storage network.

By having these components in the middle, only one length of cable is needed – all 2 m cables – making everything easier to cable up and requiring less spare cable stock. As this datacentre owner found out, it also makes component replacement easier. In another rack (not pictured), replacing a switch at the top of the rack required a ladder – compounded by a stacking cable that had to be accessed from the front, all the way through the rack. A switch in the middle of the rack is far more accessible.


The eagle-eyed among you will notice in the photo on the left that there is a server at the top of this rack with multiple USB dongles installed – this is an ESXi host used for all the VMs that require dongles. Read my other post about USB and vMotion.

If you look really closely, you can see that the uplinks are just standard 1Gbps Ethernet – but this was a transitional implementation, and the datacentre owner was in the process of moving to a 10Gbps backbone between racks. In the pictured topology, replacing the data (white) network with a 10GbE switch would be simple because the connections are already aggregated at a single point.

 

So, put your top-of-rack switches in the middle!
