VMAX on a Clariion Planet, Part 1: A look at architecture and I/O flows

This article focuses on understanding the VMAX from the perspective of users who are familiar with Clariion arrays, terminology, and architecture; put another way, it is a guide to VMAX for Clariion users. We’ll look at the similarities and differences in architecture and terminology, as well as some basic storage administration tasks. Where Clariion is mentioned in this article, it applies equally to VNX arrays, as the two are similar for the purposes of this discussion.

Part 1 will focus on architecture and I/O flows, and Part 2 will discuss some storage design and provisioning concepts.

With that said, let’s examine the I/O flow from the host to a back-end disk for each array type.

Clariion I/O Flow

The above is a representation of the I/O flow from the host to a Clariion or VNX array. The diagram shows two service processors, cache within the SPs, a back-end disk enclosure with some disks, and a sample LUN. The service processors provide front-end connectivity to the hosts (possibly through switches), cache for data, and back-end connectivity to the physical disks. In this dual-controller architecture, a LUN is “owned” by either SPA or SPB, meaning the host can only access that LUN through the front-end ports of the owning SP, unless there is a failover and ownership is transferred, or “trespassed”, to the other SP. In this example, the LUN is owned by SPA, so the I/O flows as follows:

  1. The host makes an I/O request to the active owner, SPA.
  2. SPA checks its cache. If this is a read request with a cache hit, or a write, the I/O is served directly from/to the cache. If it is a read cache miss, the I/O proceeds to step 3.
  3. Access is made to the physical disk (or disks) that actually contain the requested blocks. Note that all of the physical disks are accessible through either SP, but access to the data on a particular LUN goes through the owning SP. This is what makes it an active/passive system.

Also worth mentioning is a mode of access referred to as ALUA (Asymmetric Logical Unit Access). This allows a host to send requests to SPB for a LUN owned by SPA. On the surface this may seem to transform the array into an active/active system; however, access to the LUN through SPB is considered a “suboptimal” path. This is because the I/O is not actually served by SPB; rather, it is sent to SPA over a CMI link (the dual arrows connecting the SPs in the diagram), after which the I/O flow is the same as above. Once the data is fetched, it is transferred over the CMI link again to SPB, and then to the host. ALUA is meant to mitigate certain failure conditions, not to provide true active/active access, because of the performance penalty it incurs. Both paths are sketched below.
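To make the ownership and ALUA behavior concrete, here is a minimal Python sketch of the flow just described. It is purely illustrative: the class and method names (ServiceProcessor, Lun, read) are hypothetical and do not correspond to any real Clariion/VNX API.

```python
# Minimal sketch of the owned and ALUA read paths described above.
# All names here are hypothetical and purely illustrative.

class Lun:
    def __init__(self, owner, blocks):
        self.owner = owner        # SPA or SPB; changes only on a trespass
        self.blocks = blocks      # stand-in for the back-end physical disks

    def read_from_disk(self, block):
        return self.blocks[block]


class ServiceProcessor:
    def __init__(self, name, peer=None):
        self.name = name
        self.peer = peer          # the other SP, reachable over the CMI link
        self.cache = {}           # (lun, block) -> data

    def read(self, lun, block):
        # ALUA "non-optimized" path: this SP does not own the LUN, so the
        # request is shipped to the owner over the CMI link and the result
        # comes back the same way, which is the source of the penalty.
        if lun.owner is not self:
            return self.peer.read(lun, block)

        # Optimized path, step 2: serve from cache on a hit.
        if (lun, block) in self.cache:
            return self.cache[(lun, block)]

        # Step 3: read cache miss, go to the back-end disks and fill the cache.
        data = lun.read_from_disk(block)
        self.cache[(lun, block)] = data
        return data


spa = ServiceProcessor("SPA")
spb = ServiceProcessor("SPB", peer=spa)
spa.peer = spb

lun = Lun(owner=spa, blocks={0: b"some data"})
spa.read(lun, 0)   # optimized path: miss, disk read, cache fill
spb.read(lun, 0)   # ALUA path: forwarded to SPA over the CMI link
```

Note that the forwarding branch is what keeps the system active/passive in practice: SPB never touches the disks for this LUN, it merely relays the request.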

Observation #1a: In order to scale this system for more (non-spindle-bound) performance, the SPs have to be upgraded to larger, more powerful units. You cannot, for example, add a third SP to gain additional processing power. An SP upgrade is a data-in-place upgrade, but it requires downtime. That is acceptable for some environments, but not for others.

Observation #2a: If you need more paths to a device than the front-end port count of a single SP allows, you are out of luck: you cannot spread the I/O for a particular LUN across both SPs. Your only choices are to upgrade the SPs, or to create smaller devices and assign some to SPA and some to SPB. The latter assumes you have a logical volume manager on the host that can combine them back into a larger device. In the case of VMware, you have the option of utilizing “extents”, but for reasons beyond the scope of this article it is not an optimal solution.

Next up, let’s have a look at the architecture of a VMAX.

VMAX engine

Unlike the dual-controller architecture of a Clariion or VNX, the VMAX is made up of building blocks referred to as “engines”, and it can scale from 1 to 8 engines. The above illustrates the internals of a single engine, which contains two directors: the left half of the engine represents one director and the right half the other. A director is analogous to an SP in the Clariion world. Contained within a director are components familiar from the Clariion: front-end ports, back-end ports, and cache. However, the cache in this case is a global cache, meaning it is shared among all the engines (and directors) as one big pool of (mirrored) memory. Another new component is the Virtual Matrix Interface, the interconnect by which VMAX engines communicate with each other. Because this is an active/active system with a shared global cache, any engine (and director) can access any LUN simultaneously, which is different from the Clariion architecture, in which a LUN can only be accessed by a single controller at a time.

Here is a look at an example I/O flow for a single-engine VMAX.

VMAX single engine I/O flow

This is a very simple example in which a single engine is directly connected to the physical disks that contain the blocks being requested on a LUN. The assumption is that the host has connectivity to front-end ports on both directors through the SAN. Steps 1-3 are very similar to those of the Clariion example, with the notable exception that I/O for the LUN can be serviced by both directors (analogous to controllers in the Clariion world) simultaneously. The broken red line through the global cache indicates that there are separate cache modules in each director, but they are shared as a single global pool. In single-engine configurations the cache is mirrored between directors; in multi-engine configurations it is mirrored between engines.
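As a rough illustration of what “either director can service the LUN” means in practice, here is a small Python sketch along the same lines as the Clariion one above. GlobalCache, Director, and Lun are hypothetical names, not a real VMAX interface; the point is simply the absence of an ownership check.

```python
# A rough sketch of active/active access through a shared global cache.
# All names here are hypothetical and purely illustrative.

class GlobalCache:
    """One logical pool of (mirrored) memory seen by every director."""
    def __init__(self):
        self.slots = {}            # (lun, block) -> data


class Lun:
    def __init__(self, blocks):
        self.blocks = blocks       # stand-in for the back-end physical disks

    def read_from_disk(self, block):
        return self.blocks[block]


class Director:
    def __init__(self, name, global_cache):
        self.name = name
        self.cache = global_cache  # every director shares the same pool

    def read(self, lun, block):
        # No ownership check: any director may service any LUN.
        key = (lun, block)
        if key not in self.cache.slots:
            self.cache.slots[key] = lun.read_from_disk(block)
        return self.cache.slots[key]


gm = GlobalCache()
dir1 = Director("Director 1", gm)
dir2 = Director("Director 2", gm)
lun = Lun({0: b"some data"})

dir1.read(lun, 0)   # miss: Director 1 does the disk read and fills the cache
dir2.read(lun, 0)   # hit: served from the shared global cache, no trespass needed
```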

Observation #1b: The ability to access any LUN from any engine, and to add engines rather than upgrade them as you must with a Clariion/VNX controller, is what makes this a scale-out architecture for block storage. It provides enormous opportunity for growth, because you can simply add engines to scale performance in addition to adding disk. It also provides higher resiliency for the same reason.

Observation #2b: Because any LUN can be accessed by any engine simultaneously, you can scale the performance of a single LUN to incredible levels. It would theoretically be possible to give a host paths to a LUN from every single engine and, using multipathing software such as PowerPath, drive I/O down all of those paths simultaneously!

The first VMAX I/O flow example was a simplistic case because it was a single engine with all the disks attached to it. What about when there are multiple engines, you are driving I/O to a LUN through all of them, and the physical disk containing the data in question is attached to a remote engine? How does that work?

VMAX multi-engine I/O flow: read cache miss

The above is a pretty busy diagram, so it warrants some explaining.

The Virtual Matrix fabric is the interconnect used by all the VMAX engines; it is how they communicate. The underlying interconnect technology itself is RapidIO.

You will also notice two sub-parts in the cache component: GM is global memory, and SF is the store & forward buffer. Their uses will become clear as the I/O sequencing is explained.

To frame the discussion for the I/O flow depicted above: it is an example of a read cache miss with the host connected to Engine 3/Director 4, the physical disks containing the data connected to Engine 1/Director 1, and the cache slot for that particular data on Engine 2/Director 3.

The host issues a read for some data via Engine 3/Director 4; the VMAX cannot serve the data from cache because this is a cache miss, so the following occurs…

1. Since this is a read cache miss, the data has to be retrieved from disk. The data is read from the disk into the SF (store & forward) buffer of Engine 1/Director 1’s cache. The SF buffer is used when data needs to be staged temporarily and moved to another director or engine, as in this example. It is a separate region of the cache, not shared with the GM (global memory), which is used for general-purpose cache storage.

2. Through the Virtual Matrix, the data is moved from the SF buffer of Engine 1/Director 1 to the GM of Engine 2/Director 3, because in this example that is where the cache slot for this data resides. Subsequent reads of this data can then be served from cache.

3. The data is then moved to the SF buffer of Engine 3/Director 4, where the host connectivity resides.

4. The data is moved from the SF buffer of Engine 3/Director 4 to the front-end ports and finally to the host.

This is probably the most complicated read example there is: the host connectivity, the cache slot, and the physical disk access are all on separate directors, and it is a read miss. Although there are individual cache components in each director, through the use of the Virtual Matrix the cache is treated as one big pool of memory, allowing the data to be accessed from any director and engine. You can imagine how complicated the I/O flows become with a host performing I/O to multiple engines, directors, and disks; you develop a respect for how difficult good caching algorithms are to design and implement. A simplified sketch of this flow follows.
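For the curious, the four-step flow above can be modeled with a short Python sketch. It is only a conceptual illustration of the SF/GM hand-offs; the class names are hypothetical, and each dictionary copy stands in for a transfer over the Virtual Matrix.

```python
# A sketch of the four-step read-miss flow above: host attached to
# Engine 3/Director 4, disks behind Engine 1/Director 1, and the cache slot
# owned by Engine 2/Director 3. All names are hypothetical and illustrative.

class Director:
    def __init__(self, name, disks=None):
        self.name = name
        self.disks = disks or {}   # (lun, block) -> data, for back-end directors
        self.gm = {}               # global memory: general-purpose cache slots
        self.sf = {}               # store & forward buffer: staging area


def read_miss(host_dir, slot_dir, disk_dir, lun, block):
    key = (lun, block)

    # 1. The back-end director reads the data from disk into its SF buffer.
    disk_dir.sf[key] = disk_dir.disks[key]

    # 2. Virtual Matrix transfer: SF buffer of the disk director to the GM of
    #    the director that owns the cache slot. Later reads hit this slot.
    slot_dir.gm[key] = disk_dir.sf.pop(key)

    # 3. Virtual Matrix transfer: GM of the slot director to the SF buffer of
    #    the front-end director the host is attached to.
    host_dir.sf[key] = slot_dir.gm[key]

    # 4. Out the front-end ports to the host.
    return host_dir.sf.pop(key)


# Example wiring matching the diagram:
e1d1 = Director("Engine 1/Director 1", disks={("LUN0", 0): b"blocks"})
e2d3 = Director("Engine 2/Director 3")
e3d4 = Director("Engine 3/Director 4")
data = read_miss(e3d4, e2d3, e1d1, "LUN0", 0)
```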

In Part 2, we will look at some storage design and provisioning concepts, and how the VMAX compares to the Clariion in that regard.



Categories: EMC, storage, VMAX

7 replies

  1. Nice article, but I differ on the following points.

    CX
    1. Observation #1a – We cannot add a third SP to any of the Clariion storage models. The only option is upgrading the SPs to higher-end models, but there will always be only two SPs. We can add more blades on Celerras, depending on the model.
    2. Observation #2a – We can add more front-end I/O modules to the newer CX4 models and can also reassign MirrorView ports, so front-end ports can be added to CX SPs.

    VMAX

    I agree on many counts, but I have a doubt. If you look at the Virtual Matrix architecture of the VMAX, every engine (dual paths) connects to a Matrix Interface Board (MIB, redundant), and each of these paths runs at only 2.5 GB (which seems low?). The VMAX may work great if all the I/O you need is on the back-end disks of the director you zoned to, but if the I/O has to be processed across directors of different engines, then it passes through the MIB and that becomes the bottleneck, I guess. I would say the VMAX could be a great storage model if they built an 8 GB back end and 16 GB MIB interconnects. Did I get it wrong? :)

  2. This is a great article; the diagrams are very clear and easy to read.
    You seem to be implying that the VNX and Clariion effectively have the same architecture.
    Just curious about the CMI link:
    a) What is CMI an abbreviation for?
    b) How ‘fast’ is the CMI link?
    c) Is there a difference between the CMI links on VNX and Clariion?

  3. RapidIO is a serious bottleneck and enforces many design constraints. A director can only see its own disks, which is a major drawback. If the matrix bandwidth were higher, each engine could access all the disks.

  4. Very nice post. What about the back-end disks being the bottleneck for a VMAX? No amount of scale-out advantage in terms of engines would help, right? I have always been puzzled about the disks being a bottleneck. Does the VMAX have any other feature that allows it to bypass this fundamental bottleneck? With the new VNX series, the SP hardly ever becomes a bottleneck these days, so what would be the real need for a VMAX? I hope I have stated my question clearly enough for you to help me understand.

  5. Yes, now it is clear to me. I want to learn VNX: if anyone has study documentation or a guide for EMC VNX, please forward it to me. I have been searching for the last month and hope someone can help. Waiting for a favorable reply.

  6. Very Nice! Thanks.

  7. Then the HDS VSP is the only existing tightly coupled high-end architecture, which makes me appreciate HDS more. The VMAX abandoned the DMA method, and a cache miss is very complicated, with several forwarding hops, which must increase latency. Symmetrix is dead.
