whitebox vSphere lab

Despite having a full lab in our Tampa office consisting of two UCS chassis, Nexus 5K switches, various EMC and NetApp storage devices, Avamar, Data Domain, etc., etc. (you get the point), I still felt the need to have an ESX environment at home for tinkering purposes. Some of our equipment is reserved for demo purposes and some for engineering (aka “playground”) purposes, but there is something to be said for having unadulterated access to your own equipment, and let’s face it, having ESXi running at home is just cool.

It is still a work in progress, but I wanted something with a small footprint that was quiet, and most importantly it needed to have the proper CPU support to function with FT for testing purposes. For those not familiar with vSphere FT, it essentially runs a primary VM on one ESX host and a secondary VM in lock-step on another, providing near-zero recovery time in the case of a physical ESX host (or other infrastructure) failure.
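FT is picky about CPUs, so it’s worth verifying support before relying on it. Here is a minimal sketch using the pyVmomi Python bindings for the vSphere API to ask each host whether it reports FT capability; the vCenter hostname and credentials are placeholders, not anything from my lab:

    # Minimal sketch (illustrative only): check which hosts report FT support.
    # The vCenter hostname and credentials below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only: skip cert validation
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="changeme", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            # HostCapability.ftSupported reflects whether this host's
            # CPU/BIOS combination can run Fault Tolerance
            print(host.name, "FT supported:", host.capability.ftSupported)
        view.DestroyView()
    finally:
        Disconnect(si)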

This is the current setup:

  • Chassis: Shuttle SG45H7 Intel G45 Socket 775 barebone chassis (VERY nice and compact footprint)
  • CPU: Intel Q9400 Quad Core CPU (FT support, but not tested yet as I currently have only one node)
  • MEM: 8GB DDR2-800 RAM (two 2GB x2 kits, 4 x 2GB total; this is the max this motherboard will support)
  • Disk: Hitachi 1TB Deskstar SATA HD
  • NIC: Intel PRO/1000 GT (PWLA8391GT) 1Gb PCI NIC (the onboard NIC is NOT supported by ESXi)

A few comments. In retrospect, the only thing I would change is to pick a configuration that could support more RAM, preferably 16GB+. Unfortunately, to get the small footprint of the Shuttle chassis that had to be sacrificed, as none of the boards on the “whitebox HCL” supported more RAM; these platforms were not designed for such uses. I’ll probably look into what other motherboards can be put into this chassis to support more RAM.

With this setup I currently boot ESXi 4 off a USB stick and run the entire environment from the 1TB HD. There are several VMs, including a full Win2k8 R2 domain controller, vCenter, a full View 4 environment with ~5 XP desktops, an Openfiler VM, a Celerra VSA, a NetApp simulator, and a few more misc VMs for testing purposes. Due to the 8GB memory limit I cannot leave all of these powered on, but that’s not a problem since it’s just a test environment. Openfiler actually provides really good performance even virtualized. I have updated the kernel on the rPath appliance and compiled the latest VMware Tools (rPath is Linux based) to utilize the vmxnet3 driver, and it performs well. I intend to test pvscsi with it at some point.
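With only 8GB to go around, I have to pick which VMs stay powered on. A rough pyVmomi sketch along the same lines (placeholder hostname and credentials again) that tallies configured memory per VM:

    # Rough sketch: list each VM's configured memory and power state, and
    # total up what the powered-on set is consuming against the 8GB host.
    # Hostname and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="changeme", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        powered_on_mb = 0
        for vm in view.view:
            mem_mb = vm.summary.config.memorySizeMB or 0
            state = vm.runtime.powerState
            if state == vim.VirtualMachinePowerState.poweredOn:
                powered_on_mb += mem_mb
            print("%-30s %5d MB  %s" % (vm.name, mem_mb, state))
        print("Powered-on VMs are using %d MB of 8192 MB" % powered_on_mb)
        view.DestroyView()
    finally:
        Disconnect(si)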

My eventual plan is to purchase another platform to use as the Openfiler system, move the 1TB drive out of this box, add 3-4 more drives, and use that as the shared storage appliance over NFS/iSCSI (I will use both protocols for testing purposes). As an aside, the Openfiler VM is currently streaming media content to my PS3, and it works flawlessly through the VM. The other change will be to purchase a multi-port server NIC to simulate a more “real” environment from a virtual switching perspective, and to install the dVS and the Nexus 1000v. And of course to add another ESXi node to the mix. All of the above can be had for under $800 or so.
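Once the dedicated Openfiler box exists, mounting its NFS export as a datastore on each node is scriptable as well. A sketch in the same vein; the export path, datastore name, and addresses are invented for illustration:

    # Sketch: mount an NFS export from the Openfiler box as a datastore.
    # Connecting straight to one ESXi host here; all names are made up.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esx1.lab.local", user="root",
                      pwd="changeme", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]  # only one host when connected directly
        spec = vim.host.NasVolume.Specification(
            remoteHost="openfiler.lab.local",   # the storage appliance
            remotePath="/mnt/vg0/vmstore",      # invented export path
            localPath="openfiler-nfs",          # datastore name on the host
            accessMode="readWrite")
        ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
        print("Mounted datastore:", ds.name)
        view.DestroyView()
    finally:
        Disconnect(si)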

Some pics:



Categories: vmware

2 replies

  1. I’ve just built a similar ESXi box (SG45H7 + E6300).

    I’m booting off a 1 GB CompactFlash card using a SATA adapter. I like CompactFlash because it’s non-mechanical, and the cards are easy to back up & restore after failures.

    I was successfully able to use the onboard Marvell NIC using an oem.tgz (found here http://www.vm-help.com/forum/viewtopic.php?f=12&t=1670 – sky2 module; had to edit the PCI IDs). It seems to work fine. VLANs are OK, and VMs consistently read 95-105 MB/s from their VMDKs over NFS (Linux box with software RAID 5). Thinking about adding an Intel NIC and setting up link aggregation for more speed (a dual-port NIC is out of my budget atm).

    I’m planning to put some removable 2.5″ drives in the external 3.5″ bay so it’s not completely dependent on NAS. I’d like to see if I can get them working in RAID (possibly another unsupported module ;).

    I am noticing some performance problems. It’s a little hard to describe. My old VMs (migrated from VMware Server 2.0) are OK, but new VMs aren’t as snappy. Still trying to work this one out. It’s not disk IO, and performance graphs don’t show any obvious CPU/memory (swapping) issues. It’s only got 2 GB of RAM at the moment; still waiting for the rest to arrive in the mail.

    Anyways, have a play with the sky2 module if you want to try and get the onboard NIC going.
    Also, Shuttle reckon this box will take 16 GB of RAM (http://au.shuttle.com/product_detail_spec.jsp?PI=1131).

  2. G’day mate – yeah, I got the new Shuttle SX58J3 version with 16GB of RAM and a 3.02GHz Intel proc. I’m using SD cards to run ESXi and compiled the drivers for the Marvell 88E8058, which are the new NICs on the motherboard. I have a 16-port Netgear switch and plugged all my NICs into that switch, with an uplink into my Netgear DSL router. I’m getting massive dropped packets, ranging from 25-50%, from my management NICs – I can’t even configure HA. Not sure if it’s the driver doing this or something else. I’m going to unplug one cable from each management port on the NIC (I’ve set up teaming on them) and see if that helps or isolates the problem. Have you seen this?

    Cheers,

    Andre
