2-Node vSAN for ROBO Deployments

November 27th, 2018

Deploying servers for remote offices or branch offices (ROBO) always leads to compromises.  A remote office typically needs only a few servers, so the usual high-availability setup of physical servers running VMware ESXi plus a shared storage array is overkill for this scenario.

This is where VMware vSAN comes into play.  Although vSAN normally requires three nodes for cluster quorum, it supports a 2-node ROBO configuration that uses a witness appliance, running in a central datacenter, as the third node.  This allows for minimal infrastructure at the remote site.

The vCenter for the ROBO cluster, along with the witness appliance, runs at the central datacenter. 

Minimum requirements

  • 2 hosts at remote office
  • Servers must be vSAN ReadyNodes or use hardware listed on the vSAN Compatibility Guide
  • vCenter at central datacenter
  • Witness appliance at central datacenter (cannot be on the remote site vSAN)
  • One or more flash disks per host for the cache tier (can also use PCI-E devices)
  • One or more disks per host for the capacity tier (can be flash or HDD)
  • Disks must be in JBOD or RAID passthrough mode (see the quick check below)
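
As a sanity check before enabling vSAN, you can list each host's storage devices from PowerCLI.  This is a minimal sketch with placeholder server names; disks hidden behind a RAID volume will not show up as individual devices here, which is a hint the controller is not in passthrough mode.

Connect-VIServer -Server vcenter.example.com

foreach ($hostName in 'esx01.remote.example.com', 'esx02.remote.example.com') {
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name $hostName) -V2
    # Each candidate disk should appear as its own device; IsSSD indicates
    # whether it can serve as a cache-tier device
    $esxcli.storage.core.device.list.Invoke() |
        Select-Object Device, Model, IsSSD, Size |
        Format-Table -AutoSize
}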

Create the ROBO cluster

In vCenter at the primary site, create the two-node cluster for the remote site and add the hosts.
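
If you prefer to script this step, a hedged PowerCLI equivalent looks like the following (the datacenter, cluster, and credentials are placeholders):

$dc      = Get-Datacenter -Name 'Primary-DC'
$cluster = New-Cluster -Location $dc -Name 'ROBO-Cluster'

# Add both remote hosts to the new cluster
'esx01.remote.example.com', 'esx02.remote.example.com' | ForEach-Object {
    Add-VMHost -Name $_ -Location $cluster -User root -Password 'VMware1!' -Force
}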

Enable vSAN traffic on VMkernel ports

Create VMkernel ports on the two hosts and enable vSAN traffic.  To use a direct connection between the vSAN nodes, you must configure witness traffic separation, which puts the witness traffic on an interface other than the vSAN VMkernel port.  We are going to enable witness traffic on the management VMkernel port, vmk0, which must be done from the ESXi CLI.  The command to do this is:

esxcli vsan network ip add -i vmk0 -T=witness
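
A hedged PowerCLI version of both steps is sketched below; the vSwitch, portgroup name, and addressing are assumptions for illustration.  Get-EsxCli is used to run the same esxcli command remotely.

# vSAN VMkernel IP per host (placeholder addresses)
$vsanIps = @{
    'esx01.remote.example.com' = '172.16.10.11'
    'esx02.remote.example.com' = '172.16.10.12'
}

foreach ($entry in $vsanIps.GetEnumerator()) {
    $vmHost = Get-VMHost -Name $entry.Key

    # Create the vSAN VMkernel port on the direct-connect NICs
    New-VMHostNetworkAdapter -VMHost $vmHost -VirtualSwitch 'vSwitch1' -PortGroup 'vSAN' `
        -IP $entry.Value -SubnetMask 255.255.255.0 -VsanTrafficEnabled $true

    # Tag witness traffic on the management VMkernel (same esxcli call as above)
    $esxcli = Get-EsxCli -VMHost $vmHost -V2
    $esxcli.vsan.network.ip.add.Invoke(@{interfacename = 'vmk0'; traffictype = 'witness'})
}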

For more information on witness traffic separation, see this link:

https://storagehub.vmware.com/t/vmware-vsan/vsan-stretched-cluster-2-node-guide/witness-traffic-separation-wts/

Deploy the Witness appliance

The witness appliance is delivered as an OVF package.  Deploy it at the central datacenter.

Browse to the location of the OVF file you downloaded.

Give the witness VM a name and location.

Select the host or cluster that will run the witness VM.

Accept the license agreement.

Select the configuration.  The choices here are:

  • Tiny – 10 VMs or fewer
  • Medium – up to 500 VMs
  • Large – more than 500 VMs

Choose the storage location for the witness VM and whether you want it thick or thin provisioned.

Select the witness and management networks.

Set the password for VM management.
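
The whole wizard can also be scripted.  The sketch below is hedged: the OVA path, target host, and datastore are placeholders, and the deployment option keys and password/network property names vary by appliance version, so inspect the values Get-OvfConfiguration returns before running it.

$ovfPath   = 'C:\Downloads\VMware-vSAN-Witness.ova'   # placeholder path to the appliance
$targetEsx = Get-VMHost -Name 'mgmt-esx01.example.com'

$ovfConfig = Get-OvfConfiguration -Ovf $ovfPath
$ovfConfig.DeploymentOption.Value = 'tiny'            # check $ovfConfig.DeploymentOption for valid keys
$ovfConfig.ToHashTable()                              # lists the password and network properties to fill in

Import-VApp -Source $ovfPath -OvfConfiguration $ovfConfig -Name 'vsan-witness-01' `
    -VMHost $targetEsx -Datastore (Get-Datastore -Name 'mgmt-ds01') -DiskStorageFormat Thin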

Once the VM is deployed, power it on and configure the networking.

After the networking is configured, we need to add this VM as a host at the primary datacenter.  I have created a datacenter object here called “vSAN witnesses” and I’m adding the host there.  The host will have a light blue appearance in vCenter, signifying that it is a witness appliance and not an actual physical host.

Enter the hostname or IP address of the witness VM.
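
In PowerCLI the same step might look like this (the datacenter name matches the one above; credentials are placeholders):

# Datacenter object that will hold witness appliances
$witnessDc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name 'vSAN witnesses'

# The witness appliance is added like any other ESXi host
Add-VMHost -Name 'vsan-witness-01.example.com' -Location $witnessDc `
    -User root -Password 'VMware1!' -Force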

Configure vSAN storage

Begin the vSAN configuration.

Select “Configure two host vSAN cluster.”  If your vSAN is all-flash, you will also want to enable deduplication and compression.  Optional encryption is also available.

Verify that vSAN is enabled on the VMkernel adapters.  If it is not, you will need to go back and do this on the ESXi hosts.
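
A quick, read-only way to check this from PowerCLI (using the cluster name from earlier):

Get-Cluster -Name 'ROBO-Cluster' | Get-VMHost |
    Get-VMHostNetworkAdapter -VMKernel |
    Where-Object { $_.VsanTrafficEnabled } |
    Select-Object VMHost, Name, IP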

Next you will claim the disks that vSAN will use for cache and capacity.  The cache device is required to be flash storage, while the capacity drives can be either flash or HDD.
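
Disk claiming can also be scripted with New-VsanDiskGroup.  The canonical names below are placeholders you would pull from the device listing shown earlier, and they will differ per host.

foreach ($vmHost in Get-Cluster -Name 'ROBO-Cluster' | Get-VMHost) {
    # One flash cache device plus one or more capacity devices per disk group
    New-VsanDiskGroup -VMHost $vmHost `
        -SsdCanonicalName 'naa.55cd2e404c531234' `
        -DataDiskCanonicalName 'naa.5000c500a1b2c3d4'
}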

Select the witness host.  Note that one witness host is required per 2-node vSAN cluster.  A witness host can only be part of one cluster at a time, so you’ll need to deploy one per remote site.

Claim disks on the witness host just as you did on the physical hosts.
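
For reference, a scripted form of the witness and two-node settings is sketched below.  This is heavily hedged: under the covers a 2-node cluster is a stretched cluster with one host per fault domain, and the parameter names are from recent VMware.VimAutomation.Storage releases, so verify them against your PowerCLI version.

$cluster = Get-Cluster -Name 'ROBO-Cluster'
$hosts   = $cluster | Get-VMHost | Sort-Object -Property Name
$witness = Get-VMHost -Name 'vsan-witness-01.example.com'

# Each data host lands in its own fault domain
New-VsanFaultDomain -Name 'Preferred' -VMHost $hosts[0]
New-VsanFaultDomain -Name 'Secondary' -VMHost $hosts[1]

# Claim the witness disks the same way as on the physical hosts
# (device names are placeholders from the witness appliance)
New-VsanDiskGroup -VMHost $witness `
    -SsdCanonicalName 'mpx.vmhba1:C0:T1:L0' `
    -DataDiskCanonicalName 'mpx.vmhba1:C0:T2:L0'

# SpaceEfficiencyEnabled covers deduplication and compression (all-flash only)
Set-VsanClusterConfiguration -Configuration $cluster `
    -StretchedClusterEnabled $true -WitnessHost $witness `
    -PreferredFaultDomainName 'Preferred' -SpaceEfficiencyEnabled $true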

Click Finish to complete the vSAN configuration.  This will take a few minutes.

vSAN is now up and running.
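
Two read-only checks, one from vCenter and one from a host, can confirm the result; the property and namespace names below are assumptions to verify against your environment.

# Cluster-level view from vCenter
Get-VsanClusterConfiguration -Cluster 'ROBO-Cluster' |
    Select-Object Name, StretchedClusterEnabled, WitnessHost

# Per-host view: membership, node UUID, and health state
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esx01.remote.example.com') -V2
$esxcli.vsan.cluster.get.Invoke()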

Conclusion

vSAN offers a low-cost option for highly available storage at remote offices.  Consider it if you need to deploy a small number of virtual servers at a remote location.
