Hyper-Converged with Dell-VMware EVO:RAIL

Dell-VMware EVO:RAIL Hyper-Converged Solution
There are a lot of interesting things going on in the industry with servers, storage, and networking: SDN (software defined networking), SDS (software defined storage), NFV (network function virtualization), SD-WAN (software defined WAN), hyper-convergence, and a growing focus on private / public / hybrid cloud, just to name a few. I've written several prior posts on VMware NSX and SDN. In this short blog, I'll focus on hyper-convergence, specifically the Dell-VMware EVO:RAIL solution. If you haven't already, check out the Dell Networking Dell-EVO:RAIL Reference Architecture for more detailed information.

Dell-VMware EVO:RAIL Appliance

The Dell-VMware EVO:RAIL solution is a highly automated, hyper-converged infrastructure appliance. The EVO:RAIL software is customized for and runs on reliable Dell hardware, providing a cost-effective, hyper-converged virtualization platform.

The hyper-converged solution includes the industry-leading VMware vSphere server virtualization platform and VMware VSAN storage virtualization technology installed on a Dell 2RU, four-server-node hardware appliance.

Dell-VMware EVO:RAIL Back View

The Dell EVO:RAIL 2RU chassis has 2 x 2.10 GHz six-core Intel CPUs per node, 192 GB of physical memory per node, 2 x 10GbE connectivity per server node, and 13.1 TB of usable VSAN storage per appliance. It can scale up to eight appliances combined for a total of thirty-two server nodes and 104.8 TB of usable VSAN storage!
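The scale-out numbers fall straight out of the per-appliance figures above; here is a quick sketch of the math (the constants are simply the ones quoted in this post).

    # Scale-out math for a fully populated EVO:RAIL cluster (figures quoted above).
    NODES_PER_APPLIANCE = 4
    USABLE_VSAN_TB_PER_APPLIANCE = 13.1
    TEN_GBE_PORTS_PER_NODE = 2
    MAX_APPLIANCES = 8

    nodes = MAX_APPLIANCES * NODES_PER_APPLIANCE                # 32 server nodes
    usable_tb = MAX_APPLIANCES * USABLE_VSAN_TB_PER_APPLIANCE   # 104.8 TB usable VSAN
    ports = nodes * TEN_GBE_PORTS_PER_NODE                      # 64 x 10GbE ToR ports (32 per switch)

    print(nodes, round(usable_tb, 1), ports)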

The great thing about such hyper-converged solutions is the consolidation of functionality in a small form factor, and you can get up and running very quickly with an automated setup. Yes, you still have to configure your networking at the ToR switch (e.g. management, vMotion, VSAN, and compute VLANs and their respective gateways), but once this is done, you simply power on the EVO:RAIL appliance within your environment, step through a wizard, and you're done.
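To make that ToR prep concrete, below is a minimal sketch of an example network plan; the VLAN IDs, subnets, and gateways are hypothetical placeholders (not EVO:RAIL defaults), and the snippet simply sanity-checks that each gateway sits inside its subnet and that no subnets overlap.

    import ipaddress

    # Hypothetical ToR VLAN plan for an EVO:RAIL deployment (IDs/subnets are examples only).
    vlan_plan = {
        "management": {"vlan": 110, "subnet": "10.10.110.0/24", "gateway": "10.10.110.1"},
        "vmotion":    {"vlan": 120, "subnet": "10.10.120.0/24", "gateway": "10.10.120.1"},
        "vsan":       {"vlan": 130, "subnet": "10.10.130.0/24", "gateway": "10.10.130.1"},
        "compute":    {"vlan": 140, "subnet": "10.10.140.0/24", "gateway": "10.10.140.1"},
    }

    seen = []
    for name, cfg in vlan_plan.items():
        net = ipaddress.ip_network(cfg["subnet"])
        gw = ipaddress.ip_address(cfg["gateway"])
        assert gw in net, f"{name}: gateway {gw} not in {net}"
        assert not any(net.overlaps(other) for other in seen), f"{name}: subnet overlap"
        seen.append(net)
        print(f"VLAN {cfg['vlan']}  {name:<10}  {net}  gw {gw}")

These VLANs, and the ability for that traffic to reach the appliance uplinks, are what the EVO:RAIL setup wizard expects to already be in place on the ToR switches.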

Further, the small form factor makes the Dell-EVO:RAIL solution perfect not only for saving space within the data center but also for remote office / branch office (ROBO) locations. The ability to scale up to 8 x EVO:RAIL appliances within a cluster allows for a pay-as-you-grow model.

VMware vSphere ESXi is installed on all four nodes of the appliance. In addition, VMware vCenter Server Appliance and VMware vRealize Log Insight both run on server node 1, and VSAN is distributed across all nodes.

Each EVO:RAIL server node in the appliance is used for both compute and storage (VSAN) and has 5 x drives, for a total of 20 drives per Dell-VMware EVO:RAIL appliance. Each ESXi node has 3 x 1.2 TB SAS, 1 x 300 GB SAS, and 1 x 480 GB SSD drives. The 1 x 300 GB SAS drive in drive slot 1 on each node is used as the boot drive for ESXi, the 3 x 1.2 TB SAS drives in drive slots 2-4 are used for VSAN data, and the 1 x 480 GB SSD drive in drive slot 5 is used as the VSAN read/write cache.
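For a rough sanity check on the capacity numbers, the sketch below works out the raw VSAN capacity tier per appliance. Note that the SSDs are cache only and do not add usable capacity, and my reading that the quoted 13.1 TB usable figure is roughly the 14.4 TB raw tier expressed in binary (TiB) terms is an assumption on my part, not something stated in the reference architecture.

    # Rough per-appliance VSAN capacity math (the TiB interpretation is my assumption).
    NODES = 4
    CAPACITY_DRIVES_PER_NODE = 3
    DRIVE_TB = 1.2  # decimal terabytes per 1.2 TB SAS capacity drive

    raw_tb = NODES * CAPACITY_DRIVES_PER_NODE * DRIVE_TB  # 14.4 TB raw capacity tier
    raw_tib = raw_tb * 1e12 / 2**40                       # ~13.1, in line with the usable figure quoted
    print(round(raw_tb, 1), round(raw_tib, 1))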

Also, EVO:RAIL automatically starts two main daemons when booted:

  1. Loudmouth, or VMware’s Zeroconf implementation for auto-configuration (on all nodes and VCSA), and
  2. MARVIN, or the EVO:RAIL management daemon (on VCSA).

Loudmouth is VMware's own implementation of Zeroconf, a set of RFC-backed technologies that work together to provide automated network configuration and service discovery for devices. These auto-discovery capabilities are what allow the appliance to configure itself: a Loudmouth daemon resides on each of the ESXi nodes and inside the VCSA instance, which lets the EVO:RAIL management engine (MARVIN) discover all nodes and automate their configuration accordingly. For more details on how this works, see the Dell Networking Dell-EVO:RAIL Reference Architecture.
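To make the Zeroconf idea concrete, here is a minimal discovery sketch using the third-party python-zeroconf package. The service type string below is a made-up placeholder rather than whatever Loudmouth actually advertises; the point is only to show the mDNS/DNS-SD browse-and-resolve pattern this style of auto-discovery relies on, which also explains why discovery stays confined to the local L2 segment.

    import time
    from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

    SERVICE_TYPE = "_example-hci._tcp.local."  # placeholder; not the real Loudmouth service type

    class NodeListener(ServiceListener):
        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            if info:
                print(f"discovered {name} at {info.parsed_addresses()} port {info.port}")

        def update_service(self, zc, type_, name):
            pass

        def remove_service(self, zc, type_, name):
            print(f"lost {name}")

    zc = Zeroconf()
    browser = ServiceBrowser(zc, SERVICE_TYPE, NodeListener())
    try:
        time.sleep(10)  # browse the local segment for a few seconds
    finally:
        zc.close()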

Dell-VMware EVO:RAIL Components

Best practice is to use two ToR switches connected to EVO:RAIL for high availability (HA) purposes. EVO:RAIL leverages IPv4 multicast for VSAN and IPv6 multicast for the EVO:RAIL management software's auto-configuration. The requirement is that this traffic, including multicast, must be able to traverse between the ToR switches.
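If you want to verify that multicast actually makes it across the inter-switch link before deploying, a quick host-to-host test along these lines can help; the group address and port are arbitrary test values I picked for the sketch, not the groups VSAN or EVO:RAIL actually use.

    import socket
    import struct
    import sys

    GROUP, PORT = "239.1.2.3", 5001  # arbitrary test group/port for this sketch

    def sender():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
        s.sendto(b"multicast-test", (GROUP, PORT))

    def receiver():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, addr = s.recvfrom(1024)
        print(f"received {data!r} from {addr}")

    if __name__ == "__main__":
        receiver() if sys.argv[1:] == ["recv"] else sender()

Run the receiver ("recv" argument) on a host hanging off one ToR switch and the sender on a host off the other; if the message arrives, IPv4 multicast is traversing the inter-switch link (a similar test can be built for IPv6 multicast).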

A VLTi port-channel is recommended as it meets these requirements and allows the switches to be upgraded without the downtime that stacking would require. VLT is Dell's L2 multi-pathing technology that allows for active-active connectivity, without network loops, between a pair of switches and a server or between a pair of switches and another switch. Another option here could be to deploy stacking; up to six S4810s can be stacked to appear and be managed as one logical unit. You can read more about VLT in this prior post.

EVO:RAIL does not use any LAGs on the server side. NIC teaming is handled entirely through VMware's NIC teaming options, with 'Route based on originating virtual port ID' as the default load-balancing policy. All traffic types (management, vMotion, VSAN, and compute) use an Active/Standby uplink configuration as shown below.
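As a read-only way to confirm that teaming policy from the vSphere side, here is a hedged pyVmomi sketch that lists each standard-vSwitch port group's load-balancing policy and active/standby uplink order on every host; the connection details are placeholders for your own environment.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; point these at your own vCenter (or ESXi host).
    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name)
            for pg in host.config.network.portgroup:  # standard vSwitch port groups
                teaming = pg.computedPolicy.nicTeaming
                # 'loadbalance_srcid' is the API name for route based on originating virtual port ID
                print(f"  {pg.spec.name}: policy={teaming.policy} "
                      f"active={teaming.nicOrder.activeNic} standby={teaming.nicOrder.standbyNic}")
    finally:
        Disconnect(si)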

EVO:RAIL NIC Teaming Configuration

Using two Dell S4810s, my network setup would look like the diagram below. The ToR switches could also be any of the Dell Networking 10/40GbE switches, such as the S4820T, S4048-ON, S5000, S6000/S6000-ON, or N4000-series switches.

Dell-VMware EVO:RAIL Network Diagram

EVO:RAIL can be purchased with either 2 x 10GbE fiber ports per server node or 2 x 10GbE copper ports per server node. If the EVO:RAIL appliance has copper ports, the Dell S4810 here could be replaced with the Dell S4820T (48 x 10GBASE-T + 4 x 40GbE) or another 10GBASE-T switch such as the N4032 or N4064.

With the Dell S4810, adding the maximum of 8 x EVO:RAIL appliances to the cluster utilizes 64 x 10GbE ToR ports (32 x 10GbE ports per switch) connecting to the EVO:RAIL cluster as shown below.

Dell-VMware EVO:RAIL Cluster Deployment with 8 x EVO:RAIL Appliances

Although only 8 x EVO:RAIL appliances (32 servers) are supported in one EVO:RAIL cluster solution, the same ToR switches can be used to support multiple EVO:RAIL cluster solutions. Each EVO:RAIL cluster would be a separate unit/solution or tenant. In such a case, a higher-density switch, such as the Dell S6000 (32 x 40GbE, or break-out to 108 x 10GbE), can be used.

EVO:RAIL auto-discovery and deployment is not supported across L3 boundaries, meaning EVO:RAIL appliances belonging to the same EVO:RAIL cluster cannot be spread across different subnets. As such, all appliances in a given cluster must be connected to the same ToR switches. In such a deployment, it is recommended to use L3 between the ToR and aggregation switches. If a complete L2 design is used instead, it is important that different VLANs be used for the respective clusters to provide traffic isolation and better performance and scalability.

Network Diagram for Multiple Dell-VMware EVO:RAIL Cluster Deployments

For more detailed information, see the Dell Networking Dell-EVO:RAIL Reference Architecture.

Follow me on Twitter: @Humair_Ahmed

