VMware Cloud on AWS with Direct Connect: NSX Networking and vMotion to the Cloud with Demo

VMware Cloud on AWS with Direct Connect

I also published this blog post, VMware Cloud on AWS with Direct Connect: NSX Networking and vMotion to the Cloud with Demo, on the VMware NSX Network Virtualization Blog on April 4, 2018. The full post is provided below and can also be read on the VMware NSX Network Virtualization Blog site.

VMware NSX Network Virtualization Blog
Title: VMware Cloud on AWS with Direct Connect: NSX Networking and vMotion to the Cloud with Demo
Author: Humair Ahmed
Date Published: April 4, 2018

Check out my prior blogs below on the VMware Network Virtualization blog on how NSX is leveraged in VMware Cloud on AWS to provide all the networking and security features. These prior blogs provide a foundation that this post builds on. In this blog post I discuss how AWS Direct Connect can be leveraged with VMware Cloud on AWS to provide high bandwidth, low latency connectivity to an SDDC deployed in VMware Cloud on AWS. This is one of my favorite features, as it provides high bandwidth, low latency connectivity from on-prem directly into the customer’s VMware Cloud on AWS VPC, enabling better and more consistent connectivity/performance while also enabling live migration/vMotion from on-prem to cloud! I want to thank my colleague, Venky Deshpande, who helped with some of the details in this post.

Prior VMware Cloud on AWS with NSX Blogs:

VMware SDDC with NSX Expands to AWS

VMware Cloud on AWS with NSX: Connecting SDDCs Across Different AWS Regions

VMware Cloud on AWS with NSX: Communicating with Native AWS Resources

As mentioned in my prior blog, VMware SDDC with NSX Expands to AWS, Hybrid Linked Mode (HLM) can be configured between the vCenter on VMware Cloud on AWS and the on-prem vCenter to allow for single pane of glass management.

Figure 1: VMware Cloud on AWS HLM Setup

As shown below, this enables capabilities such as cold migration and live migration/vMotion. A user can simply click a VM in the on-prem vCenter inventory and vMotion it to the vCenter inventory in the respective SDDC in VMware Cloud on AWS, or vice versa.

Figure 2: Managing Multiple vCenters with HLM in VMware Cloud on AWS

For cold migration, the network file copy traffic can simply traverse the MGW IPSEC VPN connectivity over the Internet. For better and more consistent performance, AWS Direct Connect can be leveraged, which also enables live migration/vMotion between on-prem and VMware Cloud on AWS. As explained in the Hybrid Migration with vMotion Checklist, live migration/vMotion between on-prem and VMware Cloud on AWS requires a sustained minimum bandwidth of 250 Mbps between the source and destination vMotion VMkernel interfaces and a maximum round-trip latency of 100 ms.
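As a quick illustration of those thresholds, here is a minimal Python sketch that compares measured bandwidth and round-trip latency against the 250 Mbps / 100 ms requirements; the function and the sample measurements are hypothetical, and the actual numbers would come from your own tooling (e.g., iperf and ping).

```python
# Hypothetical pre-check against the hybrid vMotion requirements:
# sustained bandwidth >= 250 Mbps and round-trip latency <= 100 ms
# between the source and destination vMotion VMkernel interfaces.

VMOTION_MIN_BANDWIDTH_MBPS = 250   # sustained minimum bandwidth
VMOTION_MAX_RTT_MS = 100           # maximum round-trip latency

def vmotion_link_ok(measured_bandwidth_mbps: float, measured_rtt_ms: float) -> bool:
    """Return True if the measured link meets the documented vMotion thresholds."""
    return (measured_bandwidth_mbps >= VMOTION_MIN_BANDWIDTH_MBPS
            and measured_rtt_ms <= VMOTION_MAX_RTT_MS)

# Illustrative measurements only.
print(vmotion_link_ok(measured_bandwidth_mbps=940.0, measured_rtt_ms=12.5))  # True
print(vmotion_link_ok(measured_bandwidth_mbps=180.0, measured_rtt_ms=35.0))  # False
```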

In the diagram below, the connectivity I’m focusing on in this blog is shown on the bottom left, depicting the different connectivity options from on-prem to the SDDC in VMware Cloud on AWS. Although AWS Direct Connect can provide high bandwidth, low latency connectivity directly into the VMware Cloud on AWS VPC, today not all traffic is supported over Direct Connect.

Everything behind the MGW and CGW within VMware Cloud on AWS can be seen as a separate routing domain, and today transitive routing is not supported within the solution shown below. Thus, the traffic currently supported over Direct Connect is ESXi Management, cold migration or network file copy (NFC), and live migration/vMotion traffic. Today, Direct Connect carries this supported traffic natively, and all other traffic, such as management appliance traffic and compute/workload traffic, is carried over VPN. The goal is to eventually support all traffic natively over Direct Connect.

Figure 3: VMware Cloud on AWS - Connectivity Options for Hybrid Cloud

With the above understanding, there are several possible designs with Direct Connect. However, before looking into these designs, it’s important to understand some of the terminology that comes up with Direct Connect; especially important is the difference between a Public Virtual Interface (Public VIF) and a Private Virtual Interface (Private VIF). Below, I briefly outline what Direct Connect is and the details of both Public and Private VIFs.

Direct Connect: Establishes a private dedicated network connection from on-prem to AWS

Direct Connect Benefits Include:

– increased bandwidth throughput

– decreased latency

– provides a more consistent network experience than Internet-based connections

Direct Connect can be established with a Public VIF or a Private VIF.

Public Virtual Interface (Public VIF)

– Private dedicated connection to AWS backbone

– Uses public IP address space and terminates at the AWS region-level

– Reliable consistent connectivity with dedicated network performance to connect to AWS public endpoints (EC2, S3, RDS)

– Customers receive Amazon’s global IP routes via BGP, and they can access publicly routable Amazon services

Private Virtual Interface (Private VIF)

– Private dedicated connection to AWS backbone

– Uses private IP address space and terminates at the customer VPC-level

– Reliable consistent connectivity with dedicated network performance to connect directly to customer VPC

– AWS advertises only the entire customer VPC CIDR via BGP

– AWS public endpoint services are not accessible over Private VIF

Between the options of Public VIF and Private VIF for Direct Connect, although both will work, customers typically prefer Private VIF; the reasons for this are discussed later in the post when the diagrams and traffic flows for the different options are covered.

AWS DX routers, or routers connected directly to the AWS backbone, are located at specific DX locations/colocation/ISP facilities. The diagram below shows connectivity from the customer on-prem environment directly into the AWS DX router at the DX location.

Figure 4: VMware Cloud on AWS and Direct Connect Deployment with no Customer Switch/Router at DX Location

The diagram below shows that the customer may also have a switch/router sitting at the DX location, in which case a cross-connect can simply be done between the customer and AWS devices. In this case, an ISP provides connectivity (MPLS/VPLS, etc.) from the customer’s on-prem environment to the DX location, where the cross-connect hooks into the AWS backbone via Direct Connect.

Figure 5: VMware Cloud on AWS and Direct Connect Deployment with Customer Switch/Router at DX Location

To get started deploying Direct Connect, customers need to log into the AWS Portal and request a Direct Connect connection. At this point they will be able to select the respective DX location/facility they would like the Direct Connect connection to be from and the respective port speed. The customer will then be sent a Letter of Authorization (LOA), which they can provide to the DX location facility to run the cross-connect.
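The same request can also be made through the Direct Connect API. The boto3 sketch below is a minimal illustration, assuming placeholder values for the region, DX location code, port speed, and connection name; the blog itself uses the AWS Portal for this step.

```python
# Hypothetical example of requesting a Direct Connect connection with boto3
# (the equivalent of the AWS Portal workflow described above).
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")  # assumed region

# Request a dedicated connection at a chosen DX location with a given port speed.
connection = dx.create_connection(
    location="EqSe2",            # example DX location code; list options with describe_locations()
    bandwidth="1Gbps",           # example port speed
    connectionName="vmc-dx-01",  # illustrative name
)
print(connection["connectionId"], connection["connectionState"])

# Once AWS makes it available, the LOA for the cross-connect can be retrieved.
loa = dx.describe_loa(connectionId=connection["connectionId"])
```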

Once the connection is established, the customer can select what type of VIF they want to create (Public VIF or Private VIF). The customer will be asked for specific information regarding the VIF configuration. Below is an example of a Private VIF connection. Within the portal, I have entered the specific information to use regarding VLAN, IP subnet, BGP ASN, etc. This configuration will be pushed down to the respective devices on the AWS side, and the customer will be able to download the configuration needed on their end. As can be seen, once the connection is established and the configuration is in place, the entire VPC CIDR of the customer’s VMware Cloud on AWS VPC is advertised to the on-prem environment.
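For reference, the fields entered in the portal (VLAN, peering IP subnet, BGP ASN) map directly onto the Direct Connect API. The boto3 sketch below is a hedged illustration with made-up values, including a placeholder connection ID and virtual gateway ID; it is not the exact configuration shown in Figure 6.

```python
# Hypothetical Private VIF creation with boto3; all values are illustrative
# and mirror the fields entered in the portal (VLAN, IP subnet, BGP ASN).
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")  # assumed region

private_vif = dx.create_private_virtual_interface(
    connectionId="dxcon-xxxxxxxx",                  # placeholder connection ID
    newPrivateVirtualInterface={
        "virtualInterfaceName": "vmc-private-vif",  # illustrative name
        "vlan": 100,                                # VLAN agreed for the VIF
        "asn": 65001,                               # customer-side BGP ASN (example)
        "authKey": "example-bgp-md5-key",           # optional BGP MD5 auth key
        "amazonAddress": "192.168.100.1/30",        # example peering subnet
        "customerAddress": "192.168.100.2/30",
        "addressFamily": "ipv4",
        "virtualGatewayId": "vgw-xxxxxxxx",         # placeholder VGW attachment
    },
)
print(private_vif["virtualInterfaceId"], private_vif["virtualInterfaceState"])
```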

Figure 6: VMware Cloud on AWS and Direct Connect Deployment Using Private VIF

Now that we have a basic foundation of VMware Cloud on AWS and Direct Connect, let’s take a look at the different deployment models currently available with VMware Cloud on AWS and Direct Connect.

This first deployment model, shown below, is the preferred model for most customers deploying Direct Connect; it leverages Direct Connect with Private VIF. As mentioned prior, Private VIF uses private IP address space and terminates at the customer VPC-level. Customers prefer this solution because it connects directly to their VPC in VMware Cloud on AWS and only the VPC CIDR is advertised back to the customer’s on-prem environment; for this reason, it’s also seen as a more secure option. With Private VIF, Direct Connect terminates on an AWS Virtual Private Gateway (VGW) within the VMware Cloud on AWS VPC; this VGW component is not visible and is transparent to users.

Here, ESXi Management, cold migration (NFC), and live migration (vMotion) traffic is carried natively over the Direct Connect connection. An IPSEC VPN connection is established from on-prem to the MGW in VMware Cloud on AWS to carry the management appliance traffic. Additionally, IPSEC VPN or L2VPN connectivity to the CGW in VMware Cloud on AWS can be used for compute. L2VPN is leveraged here to provide a consistent network on both sides for vMotion.
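Before relying on this path for migrations, it can be useful to confirm the Private VIF is available and its BGP session is up. The boto3 sketch below is an illustrative check only, assuming the same placeholder region as earlier; it is not part of the VMware Cloud on AWS workflow itself.

```python
# Hypothetical check that the Private VIF is available and its BGP peer is up
# before sending ESXi Management / NFC / vMotion traffic over Direct Connect.
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")  # assumed region

for vif in dx.describe_virtual_interfaces()["virtualInterfaces"]:
    if vif["virtualInterfaceType"] != "private":
        continue
    peers_up = all(p.get("bgpStatus") == "up" for p in vif.get("bgpPeers", []))
    print(vif["virtualInterfaceId"], vif["virtualInterfaceState"], "BGP up:", peers_up)
```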

Figure 7: VMware Cloud on AWS and Direct Connect Deployment Using Private VIF

Another deployment model with Direct Connect is to use Public VIF. Similar to Direct Connect with Private VIF, this is also a private dedicated connection to the AWS backbone providing high bandwidth and low latency connectivity. However, instead of private IP addresses, public IP addresses are used, and the Direct Connect connection terminates at the AWS region-level.

Since Direct Connect terminates at the region-level, an IPSEC VPN over Direct Connect is used from on-prem to the MGW in the VMware Cloud on AWS SDDC. This is used to carry ESXi Management, cold migration (NFC), and live migration (vMotion) traffic; it carries the management appliance traffic as well. As in the prior example, IPSEC VPN or L2VPN connectivity to the CGW in the VMware Cloud on AWS SDDC can be used for compute. L2VPN is leveraged here to provide a consistent network on both sides for vMotion. Another difference from the prior example is that the VPN connectivity for both the MGW and CGW goes over Direct Connect with Public VIF rather than over the Internet.

Additionally, since public IP addresses are used here, AWS advertises its entire routable address space; thus, Direct Connect with Public VIF can also be used to access native AWS public services from on-prem.
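For comparison with the earlier Private VIF example, a Public VIF is configured with public peering addresses and the customer prefixes to be advertised to AWS. The boto3 sketch below is again a minimal illustration with placeholder values.

```python
# Hypothetical Public VIF creation with boto3; note the public peering addresses
# and the routeFilterPrefixes announced to AWS. All values are placeholders.
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")  # assumed region

public_vif = dx.create_public_virtual_interface(
    connectionId="dxcon-xxxxxxxx",                 # placeholder connection ID
    newPublicVirtualInterface={
        "virtualInterfaceName": "vmc-public-vif",  # illustrative name
        "vlan": 200,                               # example VLAN
        "asn": 65001,                              # customer-side BGP ASN (example)
        "amazonAddress": "203.0.113.1/30",         # example public peering subnet
        "customerAddress": "203.0.113.2/30",
        "addressFamily": "ipv4",
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/24"}],  # prefixes advertised to AWS
    },
)
print(public_vif["virtualInterfaceId"], public_vif["virtualInterfaceState"])
```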

As mentioned prior, this model is less preferred by customers than the prior example, as customers prefer direct connectivity into their VPC. With this model, customers leverage additional security policies/firewall rules on-prem to ensure only their respective subnet(s) in VMware Cloud on AWS can be reached over the connectivity provided by Direct Connect.

Figure 8: VMware Cloud on AWS and Direct Connect Deployment Using Public VIF

It’s also possible to deploy Direct Connect and leverage both Private and Public VIFs. The deployment below is very similar to the first Direct Connect deployment using Private VIF only, except that instead of deploying the VPNs over the Internet, the VPNs that carry the management appliance traffic and compute/workload traffic are deployed over Direct Connect with Public VIF. VPN over Direct Connect provides better and more consistent performance than VPN over the Internet.

Figure 9: VMware Cloud on AWS and Direct Connect Deployment Using Private and Public VIF

In all of the above deployment models, L2VPN, provided by NSX, is leveraged to provide consistent networking across on-prem and the respective SDDC in VMware Cloud on AWS. An important thing to note here is that NSX is not required on-prem. Customers can deploy the unmanaged/standalone NSX L2VPN client on-prem; it can also be deployed in active/standby mode for a higher level of resiliency. A more detailed follow-up blog will provide additional details on leveraging L2VPN from on-prem to an SDDC in VMware Cloud on AWS.

Figure 10: VMware Cloud on AWS Deployment with Direct Connect and L2VPN

Check out the video below where I demo the VMware Cloud on AWS solution leveraging AWS Direct Connect and L2VPN in a hybrid cloud deployment; a VM is vMotioned from on-prem to an SDDC in VMware Cloud on AWS and then vMotioned back to on-prem.

Follow me on Twitter: @Humair_Ahmed

