A quick look at server vs traditional blade vs UCS wiring

This is just a quick post illustrating the wiring simplicity of the UCS when compared to traditional rack / blade servers…

First up is your traditional rack with Dell 1RU servers. Note the separate LAN/SAN connections per “compute unit.” Not all of the servers are wired up here, but you can imagine the mess if they were.

Next up, a Dell blade chassis. Things get a little better, but there is still quite a bit of wiring, since the LAN and SAN connections from each chassis require separate physical cabling. In this picture there are only 2 SAN connections (really 1 if you consider redundancy). Imagine this with 2-4 more SAN connections to give the type of connectivity you would need in a production environment, plus potentially more LAN connections to meet throughput needs. Also keep in mind that each chassis deployed in this manner requires all of these connections, so it can get hairy in multi-chassis configurations. If you needed more SAN bandwidth, you would need to wire up more FC connections; if you needed more LAN bandwidth, more Ethernet connections. You would need to know these requirements up front, or take some educated guesses. Guessing wrong here could create a real cabling nightmare down the road if spare FC or LAN ports aren’t available in the network. Also worth noting are the separate management interface cables.
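To make the scaling concrete, here is a minimal sketch of how per-chassis cabling multiplies in that traditional design. The link counts below are hypothetical examples chosen only to show the multiplication, not numbers taken from the photo:

```python
# Rough sketch: in a traditional blade design, every chassis repeats its own
# LAN, SAN, and management cabling. Link counts here are illustrative only.

def cables_per_chassis(lan_links=4, san_links=4, mgmt_links=2):
    """Each traffic type needs its own physical cables per chassis."""
    return lan_links + san_links + mgmt_links

def total_cables(chassis_count, **links):
    """Every additional chassis repeats the full set of connections."""
    return chassis_count * cables_per_chassis(**links)

for n in (1, 2, 4, 8):
    print(f"{n} chassis -> {total_cables(n)} cables")
```

The point is less the absolute count than the fact that three separate cable plants (LAN, SAN, management) grow in lockstep with every chassis you add.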

Finally, we have the UCS. Note the cabling simplicity. The 4 black connections from each FEX carry unified I/O to the top of rack for distribution into the core. 8 connections per chassis = 80Gb of throughput per chassis for you to divvy up as you desire. Need 50Gb of LAN and 30Gb of SAN? No problem: use UCS Manager and configure it via service profiles. Need 40Gb of LAN and 40Gb of SAN? Again, no cabling changes or added connections; simply do it in software via UCS Manager. Utilizing the Palo CNAs in the blades gives you even more flexibility, which other systems can’t match. This is truly wire-once, use-and-change-as-you-see-fit technology. Would you rather balance your LAN/SAN bandwidth requirements via physical cabling, or via QoS in software? There is much more flexibility in this design. And do you notice the cabling missing for “management”? That’s right, no dedicated management cables are needed per chassis; it is all handled via the 4 black connections per FEX to an internal management network. Another area of simplicity.
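As a back-of-the-envelope view of that bandwidth math (2 FEXes × 4 links × 10Gb per link = 80Gb per chassis, split in software rather than in cabling), here is a small illustrative sketch; the helper function is purely hypothetical and just checks a proposed split against the unified uplink capacity:

```python
# Per-chassis unified uplink capacity: 2 FEXes x 4 links x 10Gb each = 80Gb.
LINKS_PER_FEX = 4
FEX_PER_CHASSIS = 2
GB_PER_LINK = 10

TOTAL_GB = LINKS_PER_FEX * FEX_PER_CHASSIS * GB_PER_LINK  # 80Gb per chassis

def split(lan_gb, san_gb):
    """Check a proposed LAN/SAN split against the chassis uplink capacity."""
    assert lan_gb + san_gb <= TOTAL_GB, "split exceeds chassis uplink capacity"
    return {"LAN": lan_gb, "SAN": san_gb, "spare": TOTAL_GB - lan_gb - san_gb}

print(split(50, 30))  # the 50/30 example from the post
print(split(40, 40))  # rebalanced entirely in software, no recabling
```

Rebalancing from 50/30 to 40/40 is just a policy change in UCS Manager; the 8 physical cables per chassis never move.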

I get a lot of questions about what the wiring simplicity actually looks like on real kit, so I thought I would make this quick post to show it in the wild.



Categories: UCS

5 replies

  1. Hi, a simple, clean and very useful post, thank you!

  2. I know it’s months since you posted this, but this is what a full rack of cleanly cabled UCS looks like: http://itvirtuality.wordpress.com/2010/07/05/week-one-of-cisco-ucs-implementation-complete/

    Adam

  3. I know of similarly clean HP C7000 VirtualFabric implementations … this Dell stuff looks really awful – the interconnects are just between some fans. UCS is still cool (bandwidth scales better) – but I don’t see the “big” need for average datacenters to take on UCS complexity below their hypervisor – they don’t even deal properly with the traditional stuff.

