VMAX on a Clariion Planet, Part 2: storage layout and provisioning

In part 2 of this series, we'll compare the basics of storage layout and provisioning on the VMAX and the Clariion.

First, a look at how storage is composed on the two arrays.

On a Clariion (or VNX block), the storage is made up of the following components: physical disks, a RAID group, and finally a LUN. You form a RAID group out of a set of disks, and then assign a portion of that RAID group to a host as a LUN:

Clariion LUN

A set of physical disks is selected and a RAID group is formed; in this example, a RAID 5 3+1 group from 4 disks. A portion of that RAID group is then allocated to a LUN, which is presented to the host. The LUN can utilize any portion of the disks in the RAID group and is only restricted by its own size: it can consume at most its own capacity within the RAID group, but its RAID stripes can live anywhere on those disks. If desired, a MetaLUN can be formed, which combines the space of more than one LUN into a single larger LUN.
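
To make that workflow concrete, here is a rough NaviSecCLI sketch of the same steps (the SP address, disk locations, RAID group number, LUN number and capacity are all made-up values, and the exact syntax varies by FLARE/VNX OE release):

    # Create RAID group 10 from four disks (bus_enclosure_disk notation)
    naviseccli -h 10.0.0.1 createrg 10 0_0_4 0_0_5 0_0_6 0_0_7

    # Bind a 100 GB RAID 5 LUN (LUN 25) on that RAID group
    naviseccli -h 10.0.0.1 bind r5 25 -rg 10 -cap 100 -sq gb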

A Symmetrix VMAX presents storage to a host in a different manner: physical disks are sliced into hypervolumes, and a set of hypervolumes is combined under a protection type to form a Symmetrix device that can be presented to a host.

Symm Device

An interesting difference between storage allocations on a Clariion/VNX and a VMAX is that on a Clariion you specify the exact RAID group and physical disks that form a LUN; on a VMAX, when creating a device, you simply specify the size and protection level, and the array automatically chooses the back-end hypers and physical disks (within a specified disk group). It does this via an algorithm based on back-end I/O statistics and other variables to ensure storage layout is balanced on the array upon allocation. A program called Symmetrix Optimizer can run then on the array to monitor the activity of each hypervolume, and move hypers to different physical disks based on I/O patterns, hot spots, and service times thereby completely automating array I/O balancing on the back-end.
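
As a rough illustration, creating a device with SYMCLI looks something like this (the SID, size and disk group number are hypothetical, and the symconfigure syntax varies by Solutions Enabler/Enginuity version, so treat it as a sketch rather than copy-paste syntax):

    # Preview creation of one RAID-5 (3+1) device of roughly 8531 MB in disk group 1
    symconfigure -sid 1234 -cmd "create dev count=1, size=8531 MB, emulation=FBA, config=RAID-5, data_member_count=3, disk_group=1;" preview

    # If the preview passes, run the same command with commit to actually create the device
    symconfigure -sid 1234 -cmd "create dev count=1, size=8531 MB, emulation=FBA, config=RAID-5, data_member_count=3, disk_group=1;" commit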

Here is an example of a Symm Device:

Symm Dev

The above shows the properties of Symmetrix device 0104: RAID-5 protected, with a capacity of 8531 MB.

Now let's have a look at the hypers that compose this device:

Symm Device Hypers

Here we can see that the device is RAID 5 protected (R5 3+1), along with the underlying physical disks, the back-end DAs they are connected through, and the hypervolume "slices" from those physical disks that compose it. If Symmetrix Optimizer is enabled on the array, it is quite possible that the hypervolumes listed above will change over time due to its I/O load-balancing algorithms.
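
The same information is available from the CLI; a small sketch below, assuming a hypothetical SID of 1234 and the device ID from the screenshots:

    # Show the full configuration of device 0104, including its back-end hypers and disks
    symdev -sid 1234 show 0104

    # List all devices on the array with their configuration and capacity
    symdev -sid 1234 list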

Once a LUN/device is formed, how is it presented to the host? There are some differences in terminology to be aware of.

A Clariion uses the concept of storage groups. A storage group is a collection of hosts and LUNs. Below are screenshots of storage presentation in Unisphere, the Clariion/VNX management interface:

Unisphere Storage Group LUNs

Unisphere Storage Group Hosts

Once the above hosts are added to the storage group, they will be able to access the LUNs contained in the storage group, which are shown in the first screenshot.
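
The same presentation can be done from NaviSecCLI; a rough sketch below (the SP address, group name, host name and LUN numbers are made up, and the host must already be registered with the array):

    # Create the storage group
    naviseccli -h 10.0.0.1 storagegroup -create -gname ESX_SG

    # Add array LUN 25 to the group, presented to the hosts as host LUN 0
    naviseccli -h 10.0.0.1 storagegroup -addhlu -gname ESX_SG -hlu 0 -alu 25

    # Connect a registered host to the group
    naviseccli -h 10.0.0.1 storagegroup -connecthost -host esx01 -gname ESX_SG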

The VMAX uses the concept of a masking view. A masking view is composed of a storage group, a port group and an initiator group:

A storage group is a group of devices that will be accessed together:

VMAX Storage Group

A Port Group is a set of FA (front-end) ports that the devices will be accessed through:

VMAX port group

An initiator group is a group of host HBA WWNs (these can come from a single host, or from a cluster in the case of something like VMware):

VMAX Initiator groups

A Masking View is the union of a Storage Group, a Port Group and an Initiator Group. It allows all the host HBAs in the Initiator Group to access all the devices in the Storage Group through the FA ports specified in the Port Group:

VMAX Masking View
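
For reference, building the same masking view from the CLI looks roughly like this (the SID, group names, device ID, director ports and WWN are all hypothetical; check the symaccess syntax against your Solutions Enabler release):

    # Create a storage group and add device 0104 to it
    symaccess -sid 1234 -name myhost_sg -type storage create
    symaccess -sid 1234 -name myhost_sg -type storage add devs 0104

    # Create a port group and add two FA director ports
    symaccess -sid 1234 -name prod_pg -type port create
    symaccess -sid 1234 -name prod_pg -type port add -dirport 7E:0,8E:0

    # Create an initiator group and add the host's HBA WWNs
    symaccess -sid 1234 -name myhost_ig -type initiator create
    symaccess -sid 1234 -name myhost_ig -type initiator add -wwn 10000000c9123456

    # Tie the three together in a masking view
    symaccess -sid 1234 create view -name myhost_mv -sg myhost_sg -pg prod_pg -ig myhost_ig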

In part 3, we'll compare storage pools and thin provisioning on the Clariion and VMAX.



Categories: EMC, storage, VMAX

4 replies

  1. Thanks for the detailed explanation. Waiting for Part 3!!!!

  2. Where do the Thin Pools and Disk Groups fit into the scheme of things? Are Disk Groups an equivalent of RAID groups?

    • Hi Roycey,

      There isn't a separate concept called Disk Groups in the storage array; we only have RAID Groups, which are comparable to a Pool. In a RAID Group we use disks of similar type and technology (homogeneous) for performance reasons, while in a Pool we can have heterogeneous disks for better space utilization and for applications/servers with lighter I/O loads. If you refer to a bunch of disks in an array, that is just a collection of hard disks.

  3. Are hypers created by EMC support engineers when new disks are introduced to the system? If so, what guidelines do they follow in terms of the number and sizes of hypers?

    What are the advantages of using hypers vs. RAID 10 or 50? It looks like the same striping technique to distribute I/Os between multiple disk spindles. I am just guessing that maybe it was EMC's proprietary way to load balance I/Os back in the day, and they just left it in the microcode?

    What additional reading would you recommend on this subject (very little info is available on this topic, even in official EMC training courses)?
