RecoverPoint 4.0 design considerations with vRPAs (virtual RPAs)

With the release of RecoverPoint 4.0, one of the most popular enhancements is the option to deploy the solution using virtual RPAs. Prior to RecoverPoint 4.0, the only option was to use physical appliances. We are now able to run the RPAs as VMs inside ESXi and forgo the need for physical appliances. The vRPAs are free, carry no additional software cost, and deployment is super easy via OVF. This makes RecoverPoint a 100% software-based solution (excluding the physical storage array, of course). I am a huge fan of the vRPA approach for many reasons: flexible deployment options, since a customer can scale their environment up as needed at no additional cost; ease of installation, as it doesn't get any easier than simply deploying an OVF; and lastly, being able to say you have Software Defined Data Protection (this is a joke, BTW! 🙂 ). There are, however, several considerations to keep in mind when designing a solution around vRPAs.

First and foremost, performance is of the utmost importance in a RecoverPoint design. It is imperative that the RPAs themselves, whether physical or virtual, be able to handle the processing of the IO for CDP or CRR data flows. Physical RPAs have their own sizing considerations, but the per-vRPA sizing is as follows:

  • For up to 50 MB/sec & 4,500 write IOPS – 2 vCPUs & 4 GB of RAM and 1 Gb of combined iSCSI bandwidth
  • For up to 65 MB/sec & 6,500 write IOPS – 4 vCPUs & 8 GB of RAM and 8 Gb of combined iSCSI bandwidth
  • For up to 80 MB/sec & 10,000 write IOPS – 8 vCPUs & 8 GB of RAM and 16 Gb of combined iSCSI bandwidth

In each case, 80 GB of storage space and a combined 1 Gb of LAN+WAN bandwidth are required per vRPA. The minimum deployment is 2 vRPAs in a cluster. It is also important to note that the deduplication feature of RecoverPoint can only be utilized if the largest vRPAs (8 vCPU/8 GB RAM) are used in the cluster. One aspect of the vRPAs is deployment flexibility: one can easily resize vRPAs after the fact, and also deploy more vRPAs and join them to the cluster very quickly and easily. This is a sharp contrast to physical appliances, with their fixed resources and the procurement lead times involved in expanding a RecoverPoint cluster.
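To make the tiers easier to play with, here is a minimal Python sketch of a sizing helper (my own back-of-the-napkin tool, not anything from EMC) that picks the smallest vRPA profile from the table above for a given write workload:

```python
# Hypothetical sizing helper based on the per-vRPA tiers listed above.
# Tier tuples: (max MB/sec, max write IOPS, vCPUs, RAM GB, combined iSCSI Gb)
VRPA_TIERS = [
    (50, 4500, 2, 4, 1),
    (65, 6500, 4, 8, 8),
    (80, 10000, 8, 8, 16),
]

def pick_vrpa_tier(write_mbps, write_iops, need_dedup=False):
    """Return the smallest vRPA tier that satisfies the write workload.

    Deduplication is only supported with the largest (8 vCPU / 8 GB) vRPAs,
    so requesting it skips the smaller tiers.
    """
    for mbps, iops, vcpu, ram_gb, iscsi_gb in VRPA_TIERS:
        if need_dedup and vcpu < 8:
            continue  # dedup requires the largest vRPA profile
        if write_mbps <= mbps and write_iops <= iops:
            return {"vCPU": vcpu, "RAM_GB": ram_gb, "iSCSI_Gb": iscsi_gb}
    raise ValueError("Workload exceeds a single vRPA; scale out or go physical")

# Example: 60 MB/sec and 5,000 write IOPS per vRPA, no dedup
print(pick_vrpa_tier(60, 5000))  # -> {'vCPU': 4, 'RAM_GB': 8, 'iSCSI_Gb': 8}
```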

The networking requirements and topology also warrant some investigation. In a physical RPA design, the RPAs communicate with the storage array via FC. In a vRPA design, the vRPAs communicate with the storage array via iSCSI. There are 4 networks required for the vRPAs to communicate:

  • LAN (this is used for management communication)
  • WAN (this is used for replication communications to peer RPAs at a remote site for CRR/CLR)
  • iSCSI-A (iSCSI data path to the storage array)
  • iSCSI-B (second iSCSI path for redundancy)

Below is a diagram depicting the data flow in a vRPA environment. This is a logical diagram, and the red label in parentheses on each step denotes which vRPA network is utilized:

[Diagram: vRPA data flow]

If using traditional vSwitches, each network should be on its own vSwitch. If using a distributed vSwitch of some kind (VDS, Nexus 1000V, or the upcoming VMware NSX), each of the networks should have its own uplink port. In heavier environments it would be advisable to connect the iSCSI networks to their own set of physical switches, or at the very least, utilize VLAN separation on a shared switch infrastructure. As we can see from the iSCSI requirements above, the bandwidth requirements in heavier environments can be significant, so while not absolutely mandatory, it is advisable to use 10GbE for the iSCSI networks. If the networks cannot be physically separated, it is advisable to use NIOC or other QoS mechanisms to prioritize network bandwidth accordingly.
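If you're scripting the host-side prep, a pyVmomi sketch along these lines could stamp the four port groups onto every host. The vSwitch names and VLAN IDs here are purely illustrative assumptions, so adjust them to however you've physically separated the networks:

```python
# Sketch: create the four vRPA port groups on standard vSwitches with pyVmomi.
# vCenter details, vSwitch names, and VLAN IDs are illustrative assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# vRPA network -> (vSwitch, VLAN) mapping; match your physical separation
VRPA_NETWORKS = {
    "vRPA-LAN":     ("vSwitch-LAN",     101),  # management
    "vRPA-WAN":     ("vSwitch-WAN",     102),  # replication to peer cluster
    "vRPA-iSCSI-A": ("vSwitch-iSCSI-A", 201),  # iSCSI fabric A
    "vRPA-iSCSI-B": ("vSwitch-iSCSI-B", 202),  # iSCSI fabric B
}

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        net_sys = host.configManager.networkSystem
        existing = {pg.spec.name for pg in net_sys.networkInfo.portgroup}
        for pg_name, (vswitch, vlan) in VRPA_NETWORKS.items():
            if pg_name in existing:
                continue  # idempotent: skip hosts that already have it
            spec = vim.host.PortGroup.Specification(
                name=pg_name, vlanId=vlan, vswitchName=vswitch,
                policy=vim.host.NetworkPolicy())
            net_sys.AddPortGroup(portgrp=spec)
    view.Destroy()
finally:
    Disconnect(si)
```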

The vRPAs require vCenter 5.0 or higher and ESXi 5.0 or higher. Currently, the vRPA design is only compatible with RP/SE, which means VNX-only designs (as of RecoverPoint 4.0 SP1 it will be compatible with RP/EX and RP/CL). The VNX array requires the VNX OE R32 MR1 SP1 or higher code level, with at least 1 iSCSI port per storage processor. These can be 1Gb or 10Gb iSCSI SLICs, although 10Gb is highly desirable even in smaller designs and required in the higher-performance designs. The iSCSI ports do NOT need to be dedicated to RecoverPoint traffic.
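On the vSphere side, those version minimums are easy to preflight with a small pyVmomi script like the hedged sketch below (the connection details are placeholders, and the VNX OE code level would still need to be verified separately, e.g. via Unisphere):

```python
# Sketch: verify vCenter and all ESXi hosts meet the vRPA minimums (5.0+).
# Connection details are placeholders; check the VNX OE level out of band.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

MIN_VERSION = (5, 0)

def version_tuple(ver_str):
    """Turn a version string like '5.1.0' into (5, 1, 0) for comparison."""
    return tuple(int(p) for p in ver_str.split("."))

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    vc_ver = content.about.version
    assert version_tuple(vc_ver) >= MIN_VERSION, "vCenter %s is too old" % vc_ver

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        esx_ver = host.config.product.version
        assert version_tuple(esx_ver) >= MIN_VERSION, \
            "%s runs ESXi %s, below the 5.0 minimum" % (host.name, esx_ver)
        print("%s: ESXi %s OK" % (host.name, esx_ver))
    view.Destroy()
finally:
    Disconnect(si)
```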

From a resource management perspective, it's imperative that the resources for the vRPAs be guaranteed to them to avoid any performance-related problems, which can then lead to increased RTO/RPO times or data protection failures. Resource pools within vCenter should be utilized to reserve the resources for the vRPAs. And unless there is a very good reason, each vSphere host should only host 1 vRPA. The majority of RecoverPoint designs are going to leverage 2 or 3 vRPAs, and since most customers will have a minimum of 2-3 hosts in their vSphere cluster, this should not pose a problem. DRS anti-affinity rules can be leveraged to prevent multiple vRPAs from running on the same vSphere host.
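As a rough illustration of both points, here's a hedged pyVmomi sketch that sets full memory reservations on the vRPAs and adds a DRS anti-affinity rule so no two of them land on the same host. The VM names, cluster name, and CPU MHz value are all assumptions for the example:

```python
# Sketch: reserve each vRPA's memory, set a CPU reservation, and keep the
# vRPAs on separate hosts via a DRS anti-affinity rule. Names are illustrative.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VRPA_NAMES = ["vRPA1", "vRPA2"]   # assumed vRPA VM names
CLUSTER_NAME = "Prod-Cluster"     # assumed DRS cluster name

def find_objs(content, vimtype, names):
    """Return managed objects of the given type matching the given names."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    objs = [o for o in view.view if o.name in names]
    view.Destroy()
    return objs

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    vrpas = find_objs(content, vim.VirtualMachine, VRPA_NAMES)
    cluster = find_objs(content, vim.ClusterComputeResource, [CLUSTER_NAME])[0]

    # Reserve 100% of configured memory and an assumed 2000 MHz of CPU per
    # vRPA so the appliances never get starved under host contention.
    for vm in vrpas:
        cfg = vim.vm.ConfigSpec(
            memoryAllocation=vim.ResourceAllocationInfo(
                reservation=vm.config.hardware.memoryMB),
            cpuAllocation=vim.ResourceAllocationInfo(reservation=2000))
        vm.ReconfigVM_Task(cfg)

    # DRS anti-affinity rule: never run two vRPAs on the same ESXi host.
    rule = vim.cluster.AntiAffinityRuleSpec(name="vRPA-anti-affinity",
                                            enabled=True, vm=vrpas)
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
finally:
    Disconnect(si)
```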

One caveat to note is that distributed consistency groups are not supported with vRPAs. If you need to utilize distributed consistency groups, you must design the solution utilizing physical appliances.

There are, however, flexible deployment options with the RecoverPoint appliances. For example, in a two-site RecoverPoint CRR solution, site A can consist of a pair of vRPAs and site B can consist of a pair of physical RPAs. Mixing physical and virtual RPAs is a supported configuration as long as each RecoverPoint cluster itself is homogeneous (all vRPAs or all physical RPAs).

Overall, I am a big fan of the vRPA approach. Generally speaking, everything is moving toward a virtual-appliance-based deployment option (storage, networking, etc.), so it's important that we get comfortable with the resource usage and design considerations when forgoing physical appliances for some of these tasks.



Categories: DR, EMC, storage

4 replies

  1. good read.. thanks @vtwindude

  2. Vijay,

    Great post as always. Looking for a little design info and of course the first link in the search was your blog. Imagine that.

    Thanks,

    Dave

  3. Hi Vijay,

Enjoyed reading this. From the description, it looks like using vRPAs with VPLEX to replicate non-EMC storage (Hitachi, Dell, IBM, encapsulated behind a VPLEX) is a no-go?

Have you heard anything on getting away from the iSCSI requirement?

    Shane

