Simplifying SAN management for VMware Boot from SAN, utilizing Cisco UCS and Palo

One of the great features of Cisco UCS is the Palo, or Virtual Interface Card (VIC). This card allows the administrator to create many virtual NICs (vNICs) and virtual HBAs (vHBAs) on a single adapter (up to 128, with some limitations). In a VMware environment, the use of vNICs is well understood: you can create individual vNICs for the service console, vMotion, VM network traffic, IP storage traffic, and so on, and then apply QoS policies to them to guarantee service levels. You can also use dynamic vNICs and pass-through switching, which bypasses VMware's vSwitch and dynamically assigns vNICs to VMs as they are created. The benefits of creating vNICs are clear, but what about vHBAs?

At first glance, it does not seem very useful to create more than two vHBAs (one per SAN fabric); after all, two vHBAs is something the standard UCS mezzanine cards from QLogic and Emulex already give you. There is one use case, however, where the ability to create more than two vHBAs comes in handy: boot from SAN in VMware environments. The same approach applies to boot-from-SAN servers in other clustered environments, but I will use VMware, with EMC's midrange Clariion/VNX storage, to illustrate this design option.

Until fairly recently, most customers used local drives to house their vSphere installations (and many still do). With the stateless computing model offered by Cisco UCS, however, boot from SAN becomes an attractive design due to the ability to use service profiles to provision blades on the fly and to recover quickly from blade failures. With no data on the local drives, provisioning a new blade or replacing a faulty one simply becomes a matter of moving a service profile.

When provisioning storage on a Clariion/VNX for a VMware environment that boots locally, the most common approach is to create one storage group for the cluster and place all the required LUNs into it. A storage group is a collection of initiators (host HBAs) and LUNs; this pairing is what implements LUN masking, and it gives all the vSphere hosts in the cluster access to the storage. It is very important for every vSphere host in a cluster to have access to all of the LUNs, so that when a VM is moved by vMotion or DRS from one host to another, it continues to have access to its data. For this reason it is very convenient to create one storage group per cluster (instead of one per vSphere host), ensuring all hosts can see all the appropriate storage.
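To make the masking relationship concrete, here is a minimal Python sketch of that model: a storage group is simply a set of initiator WWPNs paired with a set of LUNs, and in the local-boot design one group holds every host's initiators plus every data LUN. The names and WWPNs are made up for illustration; this is not Unisphere or naviseccli code.

```python
# Illustrative model of Clariion/VNX LUN masking: a storage group is a set
# of host initiators (WWPNs) paired with a set of LUNs. Names and WWPNs
# below are hypothetical; this is not array-management code.

class StorageGroup:
    def __init__(self, name):
        self.name = name
        self.initiators = set()   # host HBA WWPNs
        self.luns = set()         # array LUN IDs

    def can_access(self, wwpn, lun):
        # A host sees a LUN only if one of its initiators and the LUN sit
        # in the same storage group; that is LUN masking.
        return wwpn in self.initiators and lun in self.luns

# Local-boot design: one storage group for the whole cluster.
cluster_sg = StorageGroup("SG_Cluster01")
cluster_sg.initiators |= {
    "20:00:00:25:b5:01:0a:01",  # esx01 HBA0
    "20:00:00:25:b5:01:0b:01",  # esx01 HBA1
    "20:00:00:25:b5:01:0a:02",  # esx02 HBA0
    "20:00:00:25:b5:01:0b:02",  # esx02 HBA1
}
cluster_sg.luns |= {10, 11, 12}  # shared VMFS/RDM LUNs

# Every host sees every datastore, so vMotion/DRS is safe.
print(cluster_sg.can_access("20:00:00:25:b5:01:0a:02", 11))  # True
```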

The problem arises with boot-from-SAN designs. In this scenario we cannot simply create one storage group and place all the LUNs into it, because each boot LUN is specific to a particular host and should be accessed ONLY by that host. The Clariion/VNX allows multiple hosts in a single storage group, but a host can belong to only one storage group at a time. Traditionally this has meant going back to one storage group per ESX host, containing that host's boot LUN as well as all of the VMFS/RDM data LUNs. Here is an example:

[Image: traditional_vsphere]

Above is a pair of vSphere hosts deployed in the traditional manner and booting from SAN; each has two HBAs.

Here is how it looks in Unisphere (Clariion/VNX management):

[Image: traditional_bfs]

This is the storage group view. As you can see, there is a storage group for each vSphere host, and each storage group contains that host's boot LUN as well as all the VMFS/RDM LUNs for the VMs.

The issue with this is scale. With two hosts it does not seem like much of a problem, but what if there were 16 hosts in the cluster? Each time a host is provisioned, one must be VERY careful to ensure that ALL of the data LUNs are present in that host's storage group. If a LUN is added to the cluster, it has to be added 16 (or more) times, once to each individual storage group; if a LUN is removed, that too must be repeated on every storage group. As the number of vSphere hosts grows, this becomes a very cumbersome and error-prone way to manage storage. Miss one LUN in one storage group, and it could spell disaster: the moment a VM on that LUN vMotions to that host, it loses access to its disk and immediately dies.
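To get a feel for the scale problem, here is a short Python sketch that prints the masking change needed in every per-host storage group when a single new data LUN is introduced. It assumes the classic naviseccli storagegroup -addhlu form (with -gname/-hlu/-alu); treat the exact CLI syntax as an assumption, and the storage group names and LUN IDs as hypothetical.

```python
# Hypothetical 16-host cluster with one storage group per host.
# Adding ONE shared VMFS/RDM LUN means touching every group.

hosts = [f"esx{n:02d}" for n in range(1, 17)]
new_alu = 42   # array LUN ID of the new data LUN
hlu = 42       # host LUN ID presented to the ESX hosts

for host in hosts:
    # Assumed naviseccli syntax; storage group names are made up.
    print(f"naviseccli storagegroup -addhlu -gname SG_{host} -hlu {hlu} -alu {new_alu}")

print(f"\n{len(hosts)} separate masking changes for one new LUN; miss any one "
      "of them and that host cannot see the new datastore.")
```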

Utilizing Cisco UCS and the Palo/VIC card, we can resolve this problem and go back to managing a single storage group for the entire cluster’s VMFS/RDM LUNs.

First we create 4 vHBAs in UCS:

[Image: ucs_4vHBA]

The idea is to use vHBA0/vHBA1 for booting (zoned properly to the SP ports) and vHBA2/vHBA3 for the data LUNs. We create a storage group for each vSphere host containing only that host's boot LUN and its vHBA0/vHBA1 initiators, plus a single “cluster storage group” containing the vHBA2/vHBA3 initiators of every host in the cluster along with all of the data LUNs.
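Building on the earlier sketch, here is a minimal Python model of the resulting masking layout: a small per-host boot storage group containing only vHBA0/vHBA1 and the boot LUN, plus one shared cluster storage group containing every host's vHBA2/vHBA3 and all the data LUNs. Group names, WWPNs, and LUN IDs are hypothetical.

```python
# Hypothetical masking layout for the 4-vHBA design.
# vHBA0/vHBA1 -> per-host boot storage group (boot LUN only)
# vHBA2/vHBA3 -> single shared cluster storage group (all VMFS/RDM LUNs)

cluster_sg = {
    "name": "SG_Cluster01_Data",
    "initiators": set(),        # every host's vHBA2/vHBA3 WWPNs
    "luns": {10, 11, 12},       # shared VMFS/RDM LUNs
}

boot_sgs = {}                   # one small storage group per host


def provision_host(host, wwpns, boot_lun):
    """wwpns maps vHBA name -> WWPN for this host."""
    # Per-host boot group: only this host's boot vHBAs and its boot LUN.
    boot_sgs[host] = {
        "name": f"SG_{host}_Boot",
        "initiators": {wwpns["vHBA0"], wwpns["vHBA1"]},
        "luns": {boot_lun},
    }
    # The data vHBAs join the one shared group; nothing else to maintain per host.
    cluster_sg["initiators"] |= {wwpns["vHBA2"], wwpns["vHBA3"]}


provision_host("esx01",
               {"vHBA0": "20:00:00:25:b5:aa:00:01", "vHBA1": "20:00:00:25:b5:bb:00:01",
                "vHBA2": "20:00:00:25:b5:aa:01:01", "vHBA3": "20:00:00:25:b5:bb:01:01"},
               boot_lun=100)

# Adding a new data LUN is now ONE change, regardless of cluster size.
cluster_sg["luns"].add(42)
print(cluster_sg)
```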

Here is how it looks in VMware:

[Image: vsphere_palo_1]

The first two vHBAs have access to ONLY the boot LUN, as you can see above.

[Image: vsphere_palo_2]

The second two vHBAs have access to the VMFS/RDM LUNs for the VMs.

By doing this, we are able to manage ALL of the data LUNs from a single storage group, just like the local-boot scenario. No more updating 16+ storage groups every time a LUN configuration change is required, and provisioning a new host becomes much simpler: create a storage group for that host's boot LUN and add the host to the existing storage group for its data/VMFS/RDM LUNs.
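As a rough illustration of the savings, the sketch below tallies the masking changes needed for a few common tasks in a hypothetical 16-host cluster with 25 shared data LUNs; the counts follow directly from the two designs described above.

```python
# Rough masking-change counts for a 16-host boot-from-SAN cluster,
# comparing one storage group per host vs. boot groups + one cluster group.
HOSTS = 16
DATA_LUNS = 25          # hypothetical number of shared VMFS/RDM LUNs

rows = [
    # task, changes with per-host SGs, changes with the cluster-SG design
    ("add one shared data LUN",    HOSTS,         1),
    ("remove one shared data LUN", HOSTS,         1),
    ("LUN entries to set up when provisioning a new host",
                                   1 + DATA_LUNS, 1),  # boot + every data LUN vs. boot only
]

print(f"{'task':<55}{'per-host SGs':>14}{'cluster SG':>12}")
for task, old, new in rows:
    print(f"{task:<55}{old:>14}{new:>12}")
```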

[Image: unisphere_palo_1]

Storage group for the host’s boot LUN depicted above.

Cluster storage group for the VMFS/RDM LUNs for VMs:

[Image: unisphere_palo_2]

We are back to easy-to-manage cluster storage, thanks to Cisco UCS and the Palo/VIC! When we want to change the LUN layout, we modify just one storage group, and when we provision new servers, we simply add the appropriate vHBAs to the cluster storage group and we are good to go, just as in the local-boot case.



Categories: Cisco, EMC, storage, UCS

3 replies

  1. I assume this requires manual HBA registration on the VNX, creating two separate hosts to assign to the storage groups. Doesn’t a vSphere host automatically register all its connections on the VNX as a single host? Would you have to somehow disable the automatic registration in this scenario?

  2. This might have an obvious answer, so forgive me for asking, but why not have overlapping “zones” or masks on the disk array? Set one has the HBAs for each ESXi server (named for that server); set two has a generic name for the ESX cluster whose datastores will be shared to it. Put all ESXi hosts in the cluster group, and only one host in each of the per-server groups. Adding a new ESXi server is then simply a matter of creating a new zone (fabric side) and a mask for that server’s boot volume, and adding its WWNs to the common cluster group. This method is pretty much what anyone building HA clusters (HACMP/MSCS/SG/etc.) has been doing for a long time, and it can be applied to VMware just as well. What am I missing? I don’t see how it’s any easier to provision 4 vHBAs and use them to accomplish the same task. Or is this a case of VNX features making what I suggest impractical?
