VMware Software Defined Data Center (SDDC) technologies like vSphere ESXi, vCenter, vSAN, and NSX have been leveraged by thousands of customers globally to build reliable, flexible, agile, and highly available data center environments running thousands of workloads. I've also previously discussed how partners leverage VMware vSphere products and NSX to offer cloud environments/services to customers. In the VMworld session NET1188BU: Disaster Recovery Solutions with NSX, I discussed how VMware Cloud Providers like iLand and IBM use NSX to provide cloud services like DRaaS. In 2016, VMware and AWS announced a strategic partnership, and, at VMworld this year, general availability of VMware Cloud on AWS (VMC on AWS) was announced; this new service is the focus of this post.
With VMC on AWS, customers can now leverage the best of both worlds: the leading compute, storage, and network virtualization stack that enterprises use to build SDDCs can now be deployed with the click of a button on dedicated, elastic, bare-metal, and highly available AWS infrastructure. That's pretty cool, and, since it's a service managed by VMware (customers don't have root access to hosts/management components), customers can focus on their apps and let VMware handle the management/maintenance of the infrastructure and SDDC components.
Deploying a vSphere environment with ESXi, vCenter, vSAN, and NSX all included, configured, and working has never been so easy. Once you are set up with VMC access and have the ability to deploy SDDCs, you will see a screen like the one below. From here it's as simple as clicking the Create SDDC button, linking to your AWS account, and making some very basic selections such as the VPC and subnet to link to/use with the SDDC.
Just like that, an SDDC is deployed and available, complete with all infrastructure components and configuration: vSphere ESXi hypervisors, vCenter, NSX Manager, NSX Controllers, NSX Edges, and vSAN.
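For those who prefer automation over clicking through the UI, SDDC deployment can also be driven programmatically. Below is a minimal Python sketch against the VMC REST API; the org ID, token header, and payload field names here are illustrative assumptions, so consult the official VMC on AWS API reference for the exact schema.

```python
# Minimal sketch: deploying an SDDC via the VMC REST API with the requests
# library. Endpoint path and payload fields are illustrative assumptions;
# see the official API reference for the exact schema.
import requests

VMC_API = "https://vmc.vmware.com/vmc/api"
ORG_ID = "my-org-id"            # placeholder organization ID
TOKEN = "my-auth-token"         # placeholder auth token from the VMC console

payload = {
    "name": "My-SDDC",
    "provider": "AWS",
    "region": "US_WEST_2",      # US West (Oregon)
    "num_hosts": 4,             # minimum SDDC size
    # Linking to an existing AWS account/VPC/subnet (assumed field names)
    "account_link_sddc_config": [{
        "connected_account_id": "my-aws-account-link-id",
        "customer_subnet_ids": ["subnet-0123456789abcdef0"],
    }],
}

resp = requests.post(f"{VMC_API}/orgs/{ORG_ID}/sddcs",
                     headers={"csp-auth-token": TOKEN},
                     json=payload)
resp.raise_for_status()
print("SDDC creation task submitted:", resp.json())
```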
Once deployment is complete, you will see the new SDDC under the SDDCs tab with a brief summary view of its resource utilization. Below, you can see I have already deployed an SDDC in the US West (Oregon) region.
Today, two AWS regions are available, US West (Oregon) and US East (N. Virginia), with more regions planned for the near future. The US East (N. Virginia) region was announced recently at AWS re:Invent 2017. Some pretty heavy-duty servers are provided, with a minimum of four hosts per SDDC. Each host has 2 CPUs, 36 cores, 72 hyper-threads, 512 GB RAM, and NVMe-attached flash storage (3.6 TB cache plus 10.7 TB raw capacity tier).
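To put those per-host specs in perspective, here is a quick back-of-the-envelope calculation of what the minimum 4-host SDDC provides in aggregate; note these are raw numbers, before any vSAN overhead.

```python
# Aggregate resources for the minimum 4-host SDDC, using the per-host
# specs above (raw numbers, before vSAN overhead such as FTT policies).
hosts = 4
total_cores = hosts * 36            # 144 physical cores
total_threads = hosts * 72          # 288 hyper-threads
total_ram_gb = hosts * 512          # 2,048 GB (2 TB) of RAM
total_cache_tb = hosts * 3.6        # 14.4 TB NVMe cache tier
total_capacity_tb = hosts * 10.7    # 42.8 TB raw capacity tier
print(f"{total_cores} cores / {total_threads} threads, "
      f"{total_ram_gb} GB RAM, {total_cache_tb:.1f} TB cache, "
      f"{total_capacity_tb:.1f} TB raw capacity")
```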
Clicking the title of the SDDC takes you to the summary dashboard view of the SDDC as shown below.
Clicking the Network tab, you can see from the auto-generated diagram that initially VMC is connected to the configured AWS VPC, but no connectivity is yet configured to any on-prem environment. Further down on the same Network tab, additional configuration can be done for firewall, NAT, VPN, etc. What's important to note here is that all of these networking services used by VMC on AWS are enabled by NSX: you get NSX logical networks, firewall/security capabilities, NAT, VPN, etc. Some of this configuration will be covered later in the post. Note below the default Deny All firewall policies configured within the environment for a zero-trust security model.
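To illustrate that zero-trust model, here is a tiny, runnable Python sketch of top-down firewall rule evaluation: any traffic not matched by an explicit allow rule falls through to the default Deny All. The rule names and fields are descriptive placeholders, not an actual NSX schema.

```python
# Runnable illustration of zero-trust, top-down firewall rule evaluation:
# anything not explicitly allowed falls through to the default Deny All.
# Rule fields are descriptive placeholders, not an actual NSX schema.
RULES = [
    {"name": "Allow admin to vCenter", "src": "admin-client",
     "dst": "vcenter", "action": "allow"},
    {"name": "Default Deny All", "src": "any", "dst": "any",
     "action": "deny"},
]

def evaluate(src, dst):
    """Return the first rule matching the flow, scanning top-down."""
    for rule in RULES:
        if rule["src"] in (src, "any") and rule["dst"] in (dst, "any"):
            return rule["name"], rule["action"]

print(evaluate("admin-client", "vcenter"))  # ('Allow admin to vCenter', 'allow')
print(evaluate("internet", "esxi-mgmt"))    # ('Default Deny All', 'deny')
```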
In the below screenshot, you can see I have set up connectivity to on-prem. In this example, I leverage IPSEC VPN to connect to on-prem. You can see I have two IPSEC VPN connections: one for management/ESXi traffic and one for compute/workload traffic. Within VMC, separate NSX Edges are used for the Management Gateway (MGW) and the Compute Gateway (CGW).
For the MGW IPSEC VPN configuration shown below, it can be seen that the Local Gateway IP is the public IP address assigned to the MGW Edge at SDDC creation time. The Remote Gateway Public IP is the public IP address of the on-prem VPN endpoint. The Local Network is the VMC network that will be reachable from on-prem, and the Remote Networks are the on-prem networks reachable from VMC. The respective traffic will still need to be allowed through the MGW Edge firewall.
Note the vCenter Management rule allowing VMC vCenter management access from an external client; the external client in this case has the IP address 204.237.202.117. The CGW IPSEC VPN configuration will be covered later in this post.
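The MGW VPN settings described above can be summarized as illustrative data; the field names below mirror the fields in the VMC UI, with placeholder values, and are not an actual API schema.

```python
# Illustrative summary of the MGW IPSEC VPN fields described above.
# Values are placeholders; field names mirror the VMC UI, not an API schema.
mgw_ipsec_vpn = {
    "local_gateway_ip": "<MGW public IP assigned at SDDC creation>",
    "remote_gateway_public_ip": "<on-prem VPN endpoint public IP>",
    "local_network": "<VMC management network exposed to on-prem>",
    "remote_networks": ["<on-prem networks reachable from VMC>"],
}
# Remember: matching traffic must still be allowed through the MGW Edge
# firewall, e.g. the vCenter Management rule for 204.237.202.117 above.
```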
I can also configure Hybrid Linked Mode (HLM) between VMC and my on-prem vCenter to allow for cold migration of workloads over IPSEC VPN; in this case the cold migration traffic would traverse the MGW IPSEC VPN connection.
At AWS re:Invent 2017, new L2VPN and AWS Direct Connect capabilities were also announced. See here for additional details.
The below screenshot from my lab environment shows my VMC vCenter; you can see I have created three logical networks (VXLAN-backed networks): VMC_Web, VMC_App, and VMC_DB.
A logical network can be created by simply clicking the Add button and specifying the respective network info as shown below.
When a logical network is created, the connectivity and routing are all automated. The networks are automatically connected to the Compute Gateway distributed logical router (DLR): a logical interface (LIF) is automatically configured on the DLR, and the routes to reach the logical networks are automatically configured on the Compute Gateway DLR/NSX Edge for both East/West and North/South connectivity. Since the topologies in VMC are prescriptive, all of this can be automated, and there is no need for a DLR Control VM or routing protocols inside the VMC environment.
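Conceptually, adding a logical network triggers a fixed sequence of NSX operations. The runnable Python sketch below uses hypothetical stub functions to make that sequence explicit; these are stand-ins for internal VMC/NSX steps, not a public API.

```python
# Runnable, conceptual sketch of what VMC automates when a logical network
# is added. The functions are hypothetical stand-ins for internal VMC/NSX
# steps; they are not a public API.
def provision_segment(name):
    print(f"1. create VXLAN-backed logical network '{name}'")
    return name

def attach_dlr_lif(segment, gateway_ip):
    print(f"2. configure LIF {gateway_ip} for '{segment}' on the Compute DLR")

def program_routes(cidr):
    print(f"3. install routes for {cidr} on the DLR and CGW NSX Edge "
          f"(East/West and North/South)")

def create_logical_network(name, gateway_ip, cidr):
    segment = provision_segment(name)
    attach_dlr_lif(segment, gateway_ip)
    program_routes(cidr)

create_logical_network("VMC_Web", "<gateway IP>", "<VMC_Web CIDR>")
```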
I have also deployed VMs in VMC and placed them on the respective NSX logical networks created above. Below I show the VMs VMC_Web_VM_1 and VMC_App_VM_1 on their respective logical networks.
Above, you can see that VMC_Web_VM_1 and VMC_App_VM_1 are currently on the same VMC ESXi host, esx-2.sddc-35-162-46-174.vmc.vmware.com. Below, I vMotion the VMC_App_VM_1 VM to VMC ESXi host esx-0.sddc-35-162-46-174.vmc.vmware.com, keeping it on the same NSX logical network, which spans all ESXi hosts to ensure consistent networking.
Below, I select esx-0.sddc-35-162-46-174.vmc.vmware.com as the destination ESXi host.
I ensure the destination network is the same and click through the rest of the workflow to initiate the vMotion.
Below, I show that the VMC_App_VM_1 VM has been vMotioned to VMC ESXi host esx-0.sddc-35-162-46-174.vmc.vmware.com.
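Since VMC exposes a standard vCenter endpoint (with a cloud admin user rather than full administrative access), the same vMotion can be scripted with pyVmomi. Below is a minimal sketch under that assumption; the vCenter FQDN and credentials are placeholders for this lab, and error handling is omitted for brevity.

```python
# Minimal pyVmomi sketch of the vMotion done above via the UI. The vCenter
# FQDN is an assumption for this SDDC, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; verify certs in production
si = SmartConnect(host="vcenter.sddc-35-162-46-174.vmc.vmware.com",  # assumed FQDN
                  user="cloudadmin@vmc.local", pwd="********",
                  sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vim_type, name):
    """Walk the inventory and return the first object with a matching name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim_type], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "VMC_App_VM_1")
host = find_by_name(vim.HostSystem, "esx-0.sddc-35-162-46-174.vmc.vmware.com")

# Relocate to the destination host only; the NSX logical network spans all
# hosts, so the VM keeps its existing network connection.
task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(host=host))
Disconnect(si)
```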
Below, from my VMC_Web_VM_1 VM on the VMC_Web logical network, you can see I can ping the VMC_App_VM_1 VM on the VMC_App logical network as expected.
Since I have connected VMC to on-prem via IPSEC VPN, I can also communicate with my workloads on-prem. My current setup using IPSEC VPN for the MGW and CGW gateways is shown below.
On-prem, I have VMs that need to be able to communicate with the Web VMs on the VMC_Web logical network in VMC.
In this setup, the MGW IPSEC VPN connection is used for management and ESXi traffic, and the CGW IPSEC VPN connection is used for compute/workload traffic. With VMC, policy-based IPSEC VPN is used. Thus, in my CGW IPSEC VPN configuration, I expose the VMC_Web network as the Local Network, as shown below, so my on-prem workloads can communicate with my compute workloads on the VMC_Web network in VMC.
Also note, for this test, I allow ICMP traffic through the CGW Edge firewall for the respective workloads. The Remote Networks are the on-prem compute/workload networks reachable from VMC. Similar to the MGW, the Local Gateway IP is the public IP address assigned to the CGW Edge at SDDC creation time, and the Remote Gateway Public IP is the public IP address of the on-prem VPN endpoint.
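As with the MGW, the CGW settings can be summarized as illustrative data. With policy-based IPSEC VPN, it is the Local/Remote Network pairs that define which flows are steered into the tunnel; the field names and values below are placeholders, not an actual API schema.

```python
# Illustrative summary of the policy-based CGW IPSEC VPN described above.
# With policy-based VPN, the Local/Remote Network pairs define which flows
# are steered into the tunnel. Values are placeholders, not an API schema.
cgw_ipsec_vpn = {
    "local_gateway_ip": "<CGW public IP assigned at SDDC creation>",
    "remote_gateway_public_ip": "<on-prem VPN endpoint public IP>",
    "local_networks": ["VMC_Web"],                  # exposed to on-prem
    "remote_networks": ["<on-prem workload networks>"],
}
# CGW Edge firewall rule added for this test:
cgw_icmp_rule = {"source": "<on-prem workload network>",
                 "destination": "VMC_Web", "service": "ICMP",
                 "action": "Allow"}
```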
On-prem, I have a VM (10.114.223.70) on a VLAN-backed network. Below, it can be seen that the on-prem VM can communicate with VMC_Web_VM_1 (10.61.4.1) on the VMC_Web network.
My on-prem VMs are now able to communicate with my Web VMs in VMC. I can also enable VMs/workloads in VMC to communicate directly out to the Internet. I do this by first requesting a public IP address within VMC and then configuring NAT. In this case, I configure a 1:many NAT rule for my entire VMC_Web CIDR block. I also ensure the correct security policies are applied to allow for DNS and communication via HTTP/HTTPS.
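The resulting configuration can be sketched as follows; the public IP is a placeholder, the /24 for the VMC_Web CIDR is an assumption based on this lab's addressing, and the field names are descriptive rather than an actual API schema.

```python
# Illustrative 1:many (source) NAT rule giving the whole VMC_Web segment
# outbound Internet access via a requested public IP. The public IP is a
# placeholder and the /24 is an assumption based on this lab's addressing.
snat_rule = {
    "public_ip": "<public IP requested within VMC>",
    "internal_network": "10.61.4.0/24",   # assumed VMC_Web CIDR block
    "type": "1:many NAT",
}
# CGW firewall rules needed so the Web VMs can resolve names and browse:
egress_rules = [
    {"service": "DNS (UDP 53)",    "action": "Allow"},
    {"service": "HTTP (TCP 80)",   "action": "Allow"},
    {"service": "HTTPS (TCP 443)", "action": "Allow"},
]
```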
You can see below I’m able to access the HumairAhmed.com website from my VMC_Web_VM_1 (10.61.4.1) VM.
Follow me on Twitter: @Humair_Ahmed