WTF (What The FEX) are you talking about?

FEX, or Fabric Extender, technology is a core part of Cisco’s DC strategy. There are multiple marketing FEX terms that mean different things, and I’ve seen much confusion from customers and peers alike regarding these terms. There are four main FEX terms: ToR-FEX (also called “Rack-FEX”), Blade-FEX, Adapter-FEX, and finally VM-FEX.

Before continuing, it would be helpful to get some background on what FEX actually is… read about FEX here.

ToR-FEX (“Rack-FEX”):

This describes using a Nexus 2K FEX at the top of each rack, connected to a Nexus 5K/7K upstream. The server adapter port connects to the FEX, and the port shows up on the upstream switch as if it were directly connected to it; the FEX is a virtual line card in the switch, extending the fabric.

So, ToR-FEX/”Rack-FEX” = Nexus 5K/7K + Nexus 2K:

One logical (vPC) link connects the 2K to the 5K, and each server appears as if it were directly plugged in: the parent switch creates a host interface (e.g., Ethernet100/1/1) for each physical FEX port.
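To make this concrete, here is roughly what attaching a 2K to a 5K parent looks like in NX-OS. Treat it as a minimal sketch rather than a full config: the interface numbers, FEX ID, and VLAN are made up, and exact syntax can vary by platform and release.

    ! On the Nexus 5K parent: enable the FEX feature
    feature fex
    ! The uplink facing the 2K runs in fex-fabric mode
    interface ethernet 1/1
      switchport mode fex-fabric
      fex associate 100
    ! Once the 2K registers, its server-facing ports appear locally
    ! as host interfaces named Ethernet<fex-id>/1/<port>:
    interface ethernet 100/1/1
      switchport access vlan 10

You can verify the 2K registered with “show fex” and “show interface fex-fabric” on the parent switch.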

Blade-FEX:

In the UCS chassis, there is a pair of IOMs that handles the communication from the blades to the fabric interconnects; these IOMs provide FEX capability very similar to that found in the Nexus 2K. The I/O flows from the blade mezz card, through the chassis backplane, to the IOM (FEX), and from there to the fabric interconnects at the top of the rack. Much like ToR-FEX/“Rack-FEX”, the IOM extends the fabric, and the adapters on the blades show up on the fabric interconnects as vEthernet interfaces, as if the IOM were a line card in the fabric interconnects themselves.

So Blade-FEX = UCS 6K (61xx/62xx) + UCS 2K:

A very similar logical diagram to the Rack-FEX, except in a blade chassis. The UCS 2K is contained in the chassis, and the blades have a backplane connection to the FEX (IOM) instead of a wire. One logical port-channel connection (supported with 62xx hardware) extends the fabric up to the 6K; note this is a regular port-channel rather than a vPC, since each IOM homes to a single fabric interconnect and there is no peer-link between the two. Logging into the 6K, you can see the individual Ethernet and FC interfaces of the blades.
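If you want to see this for yourself, connect to the NX-OS shell of a fabric interconnect from the UCSM CLI. A quick annotated sketch (the “!” lines are notes, not commands, and the interface numbering on your system will differ):

    UCS-A# connect nxos a
    ! The chassis IOMs register just like Nexus 2K FEX modules:
    UCS-A(nxos)# show fex
    ! The blade adapters appear among the vEthernet (and vFC) interfaces:
    UCS-A(nxos)# show interface brief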

Adapter-FEX:

The term Adapter-FEX describes the act of virtualizing a physical adapter on a server (blade or rack) and having those virtualized adapters appear to the upstream Nexus switch as if they were physically connected to it. The “fabric extension” is happening from the adapter to the upstream switch, hence the term “Adapter-FEX”.

Now, there are two variants of Adapter-FEX: Adapter-FEX blade and Adapter-FEX rack, applying to Cisco’s B-series (blade) and C-series (rack-mount) servers equipped with the VIC:

So Adapter-FEX rack = VIC card + Nexus 5K (one possibility; other combinations are possible):

OR

Adapter-FEX blade = VIC card + UCS 2K + UCS 6K:

Adapter-FEX allows each server/blade to create multiple vNICs/vHBAs and have them appear on the upstream device as if they were directly connected, showing up as vEth or vFC interfaces.
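On the rack side, the Nexus 5500 piece of Adapter-FEX looks roughly like the sketch below. Consider it a hedged outline: the port, VLAN, and profile name are invented, and you should check the Adapter-FEX configuration guide for your release.

    ! Enable the virtualization feature set on the 5K
    install feature-set virtualization
    feature-set virtualization
    ! Create a vEth automatically for each vNIC the VIC presents
    vethernet auto-create
    ! The port facing the server's VIC runs VNTag
    interface ethernet 1/10
      switchport mode vntag
    ! A port-profile that the server's vNICs can inherit
    port-profile type vethernet user-data
      switchport access vlan 100
      state enabled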

 

VM-FEX:

VM-FEX is built on top of Adapter-FEX and is the ability to have control plane integration between the vSphere networking layer and the server networking. What do we mean by that?

There are two types of virtual interfaces: static and dynamic. Static vNICs are what a vSphere administrator would create (for the service console, vMotion, etc.). But as virtual machines are created, a dynamic vNIC is also created by UCSM and associated with the proper port group. This vNIC also shows up in the upstream switch as if it were directly connected. So each virtual machine has a vNIC that is created and shows up on the upstream device, just as if a physical server were plugged into a physical port. It’s all about providing a unified methodology for managing virtual and physical assets.
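On the UCSM side, the dynamic vNICs come from a dynamic vNIC connection policy referenced by the blade’s service profile. A rough UCSM CLI sketch from memory (the policy name and vNIC count are made up; verify the exact syntax against the VM-FEX configuration guide for your UCSM release):

    UCS-A# scope org /
    UCS-A /org # create dynamic-vnic-conn-policy VMFEX-POLICY
    UCS-A /org/dynamic-vnic-conn-policy # set dynamic-eth 54
    UCS-A /org/dynamic-vnic-conn-policy # set adapter-policy VMWarePassThru
    UCS-A /org/dynamic-vnic-conn-policy # commit-buffer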

So, VM-FEX = Adapter-FEX + vCenter networking control plane integration via UCSM.

In other words: VIC card + UCS 2K + UCS 6K + vSphere integration via UCSM (blade). The key is UCSM talking to vCenter.

The above shows the VM-FEX scenario for the blade, but the concept for rack servers is identical. There is control plane integration between UCSM and vCenter such that when a new VM is created, a new vEth (one per vNIC) is also created automatically on the upstream device, making it seem as if the VM were physically connected to it. This is in addition to any virtual adapters at the hypervisor level (such as a vHBA for storage, or static vNICs for hypervisor networking).
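For the rack variant, the vCenter hookup lives on the Nexus 5500 itself as an “SVS connection” (for blades, UCSM owns this relationship instead). Another hedged sketch: the IP, names, and VLAN are invented, and registering the switch’s extension key with vCenter is a separate step not shown.

    ! Point the 5K at vCenter
    svs connection my-vcenter
      protocol vmware-vim
      remote ip address 192.168.1.10 port 80
      vmware dvs datacenter-name DC1
      connect
    ! Port-profiles marked for VMware appear in vCenter as port groups
    port-profile type vethernet vm-data
      vmware port-group
      switchport access vlan 20
      state enabled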

Note: as of UCSM 2.0, VM-FEX is also supported in KVM environments.

There is an analogous rack methodology, but I don’t see it used often, and have never actually seen it implemented. Most customers I see building large VMware environments are doing so with B-series.

As we go further down the virtualization journey, these control plane integrations will become more and more prevalent, and perhaps even table stakes at some point. We have, for example, storage plug-ins for vCenter, and vCenter “awareness” in some storage GUIs, but how about more direct control plane integration for “other” storage-ish? Things that make you go…. hmmmm.

* Note: diagrams are not necessarily physical representations of full deployment scenarios. In most cases, only half the picture is displayed; there would be a second 2K, a second 5K, etc.



Categories: Cisco, FEX

