FEX, or Fabric Extender technology, is a core part of Cisco’s DC strategy. There are multiple marketing FEX terms that mean different things, and I’ve seen much confusion from customers & peers alike regarding these terms. There are four main FEX terms: ToR-FEX (also called “Rack-FEX”), Blade-FEX, Adapter-FEX, and finally VM-FEX.
Before continuing, it would be helpful to get a background on what FEX actually is… read about FEX here.
This describes utilizing a Nexus 2K FEX at the top of each rack, connected to a Nexus 5K/7K upstream. The server adapter port connects to the FEX, and that port shows up on the upstream switch as if it were directly connected to it; the FEX is a virtual line card in the switch, extending the fabric.
So, ToR-FEX/”Rack-FEX” = Nexus 5K/7K + Nexus 2K:
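On the Nexus 5K side, binding a 2K as a FEX is only a few lines of NX-OS. A minimal sketch (the FEX ID of 100, the description, and the uplink port Ethernet1/1 are arbitrary choices for illustration):

```
! Nexus 5K: enable the FEX feature and attach a 2K as virtual line card 100
feature fex
fex 100
  description "Rack 1 ToR 2K"
! Uplink port facing the 2K runs in fex-fabric mode
interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 100
```

Once the 2K comes online, its host-facing ports appear locally on the 5K as Ethernet100/1/x interfaces, configured just like any other switchport.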
In the UCS chassis, there is a pair of IOMs that handles the communication from the blades to the fabric interconnects; these IOMs provide FEX capability very similar to that found in the Nexus 2K. The I/O flows from the blade mezzanine card, through the chassis backplane, to the IOM (FEX), and from there to the fabric interconnects at the top of the rack. Much like ToR-FEX/”Rack-FEX”, the IOM extends the fabric, and the adapters on the blades show up on the fabric interconnects as vethernet interfaces, as if the IOM were a line card in the fabric interconnects themselves.
So Blade-FEX = UCS 6K (61xx/62xx) + UCS 2K:
A very similar logical diagram to the rack FEX, except in a blade chassis. The UCS 2K is contained in the chassis, and the blades have a backplane connection to the FEX (IOM) instead of a cable. One logical (vPC) connection (supported with 62xx hardware) extends the fabric up to the 6K, and logging into the 6K you can see the individual Ethernet and FC interfaces of the blades.
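You can see this for yourself from the fabric interconnect, which runs NX-OS underneath UCSM. A rough sketch (exact output varies by UCSM release):

```
! Drop from the UCSM CLI into the FI's underlying NX-OS shell
connect nxos
! The chassis IOMs are listed as FEX modules, just like Nexus 2Ks
show fex
! Per-FEX state, fabric ports, and pinning details
show fex detail
! Blade vNICs appear as Vethernet interfaces
show interface brief
```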
The term Adapter-FEX is used to describe the act of virtualizing a physical adapter on a server (blade or rack) and having those virtualized adapters appear to the upstream Nexus switch as if they were physically connected to it. The “fabric extension” is happening from the adapter to the upstream switch; hence the term “Adapter-FEX”.
Now, there are two variants of Adapter-FEX — Adapter-FEX blade, and Adapter-FEX rack, applying to Cisco’s B-series (blade) and C-series (rack mount) servers equipped with the VIC:
So Adapter-FEX rack = VIC card + Nexus5K (one possibility, other combos are possible):
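For the rack variant, the Nexus 5500 side looks roughly like the sketch below. This is a minimal illustration, not a full deployment config; the port-profile name, VLAN 10, and Ethernet1/5 are hypothetical:

```
! Nexus 5500: enable the Adapter-FEX (virtualization) feature set
install feature-set virtualization
feature-set virtualization
! Allow vethernet interfaces to be created automatically
vethernet auto-create
! Port-profile that the VIC's virtual adapters will bind to
port-profile type vethernet user-data
  switchport access vlan 10
  state enabled
! Physical port facing the VIC runs VNTag
interface Ethernet1/5
  switchport mode vntag
```

The VIC’s vNICs then show up on the 5K as vethernet interfaces inheriting the port-profile settings.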
Adapter-FEX Blade = VIC card + UCS2k + UCS6k:
VM-FEX is built on top of Adapter-FEX and is the ability to have control plane integration between the vSphere networking layer and the server networking layer. What do we mean by that?
There are two types of virtual interfaces: static and dynamic. Static vNICs are what a vSphere administrator would create (for the service console, vMotion, etc.). But as virtual machines are created, a dynamic vNIC is also created by UCSM and associated with the proper port group. This vNIC also shows up in the upstream switch as if it’s directly connected. So each virtual machine has a vNIC which is created and shows up on the upstream device, just as if a physical server were plugged into a physical port. It’s all about providing a unified methodology for managing virtual & physical assets.
So, VM-FEX = Adapter-FEX + vCenter networking control plane integration via UCSM.
In other words: VIC card + UCS2k + UCS 6k + vSphere integration via UCSM (blade). The key is UCSM talking to vCenter.
The above shows the VM-FEX scenario for the blade, but the concept for rack servers is identical. There is control plane integration between UCSM and vCenter such that when a new VM is created, a new veth (for each vNIC) is also created automatically on the upstream device, making it seem like the VM is connected physically to it. This is in addition to any virtual adapters at the hypervisor level (such as vHBAs for storage, or static vNICs for hypervisor networking).
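The pieces of that integration are visible from the fabric interconnect’s NX-OS shell as well. A rough sketch (commands shown for illustration; output format varies by release):

```
! Attach to the FI's underlying NX-OS shell
connect nxos
! Port profiles defined in UCSM -- pushed to vCenter as port groups
show port-profile
! Each running VM's dynamic vNIC appears as its own Vethernet interface
show interface brief
```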
Note: as of UCSM 2.0, VM-FEX is also supported in KVM environments.
There is an analogous rack methodology, but I don’t see it used often, and have never actually seen it implemented. Most customers I see building large VMware environments are doing so with B-series.
As we go further down the virtualization journey, these control plane integrations will become more and more prevalent, and perhaps even table stakes at some point. We have, for example, storage plug-ins for vCenter, and vCenter “awareness” in some storage GUIs, but how about more direct control plane integration for “other” storage-ish? Things that make you go…. hmmmm.
* Note: diagrams are not necessarily physical representations of full deployment scenarios. In most cases, only half the picture is displayed, there would be a second 2K, second 5K, etc.