In the previous VSA article we took a look at the storage architecture of the appliance, as well as some of the caveats and considerations when deploying it. In this article, we'll look at how to get it up and running in a nested ESXi environment, as well as some of the functions the VSA provides.
First, in order to create a nested ESXi 5.0 environment, have a look at this great article.
When creating your environment, my recommendation is to create 4 individual vDS port groups or 4 individual standard vSwitches for the environment. You will assign each to a vNIC of the vESXi host to simulate connecting each pNIC to a physical switch in a real deployment.
Be sure to configure the vSwitches (or vDS port groups) with promiscuous mode enabled, then create 2 vESXi VMs, each with a minimum of 4 NICs and a SINGLE VMFS volume (this is important; the VSA will not install otherwise). I recommend a thin-provisioned volume of about 200GB for testing.
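If your parent host is running ESXi 5.0, you can also script the vSwitch setup from the ESXi Shell rather than clicking through the vSphere Client. Here is a minimal sketch for one of the 4 vSwitches (the vSwitch and port group names are placeholders of my own; repeat for the other 3):

# create a standard vSwitch to stand in for one physical switch
esxcli network vswitch standard add --vswitch-name=vSwitch-VSA1

# add a port group for the vESXi VM's vNIC to connect to
esxcli network vswitch standard portgroup add --portgroup-name=VSA-PG1 --vswitch-name=vSwitch-VSA1

# enable promiscuous mode so the nested ESXi VM can see traffic
esxcli network vswitch standard policy security set --vswitch-name=vSwitch-VSA1 --allow-promiscuous=true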
You should end up with something like this:
Now you need a Windows-based vCenter 5.0 instance to manage this environment. Install the VSA Manager software onto that vCenter, which will then expose the VSA Manager plug-in/tab in the vSphere Client once you click on a vSphere datacenter:
In normal installations, you would then click on the VSA Manager tab and follow the instructions to install. The problem is that EVC does not work with nested vESXi, and EVC support is one of the requirements the installer checks for, so you will not be able to proceed:
Thus far I have not been able to find a workaround for the GUI-based install. However, after lots of lab time I did find a way around the problem: to install the VSA in nested ESXi and bypass the EVC requirement, we need to tweak a configuration file and then run the installation from the command line. If you haven't already, download the full zipfile, which includes the command-line installer, and unzip it onto your system.
Here is the minimum syntax to get it going:
install.exe -u root -p <password_to_ESX_hosts> -si <start_address_for_VSA_front_end_IPs> -nh
Recall that the VSA has a front-end network and a back-end network. The “-si” switch tells it what public IPs to use for the front-end. You can specify a “-bs” start range for the back-end IPs, but it will default to 192.168.0.1 as the start range if you do not specify anything. You can also specify netmasks and VLANs. See the manual for details.
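As a concrete example with purely illustrative values (the root password and the front-end and back-end start addresses below are all made up for this lab), the command might look like:

install.exe -u root -p vmware123 -si 192.168.1.50 -bs 192.168.2.1 -nh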
The "-nh" switch tells the installer not to join the hosts into a high-availability cluster, which will be important for bypassing the EVC check. If we execute the install command, this is the result:
As you can see, the automated command-line installer runs an audit stage, and the audit fails for the very same EVC reason!
Here is where that configuration file tweak comes in. We need to change a parameter in C:\Program Files\VMware\Infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes\dev.properties. Search for this line:
and change it to
This effectively bypasses the audit check for EVC. Cool, huh?
Now re-run the install.exe command, and it should complete:
And you end up with this in your nested ESXi environment:
The result is 2x 100GB data stores, which correlates with each VSA node having 200GB of RAW storage: 400GB RAW in total, or 200GB usable after the RAID10 mirroring internal to the VSA.
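To spell out the arithmetic:

2 vESXi hosts x 200GB local VMFS = 400GB RAW
400GB RAW / 2 (RAID10 mirroring) = 200GB usable
200GB usable, presented as 2 replicated 100GB data stores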
Here is a peek at the networking the VSA installer sets up:
There are front-end and back-end port groups that live on separate vSwitches and pNICs. You are now free to customize the networking however you see fit, but the install HAS to start from this default configuration or it WILL fail.
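If you want to poke at what the installer built, SSH to either vESXi host and list the vSwitches and port groups; on ESXi 5.0 something like this should do it:

# show the standard vSwitches and their uplinks
esxcli network vswitch standard list

# show all port groups, including the VSA front-end and back-end ones
esxcli network vswitch standard portgroup list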
Now that the VSA is installed, you can continue to manage it through the VSA plug-in in vCenter; we only needed the hack and the command-line installer to get it up and running. Again, it's important to note that none of this would be required in a real installation; it was only necessary due to the limitations of nested ESXi.
In the next article, we'll take a look at some administrative tasks and test out some failure scenarios to see how the VSA handles them from a downtime/uptime/reliability perspective.