Dell Networking and VMware NSX: Bridging Between Logical & Physical Networks

In a prior blog, Creating Logical Networks and Services with VMware NSX on Dell Infrastructure, I discussed how easily VMs can be moved from physical networks (VLANs) to logical networks (Network Virtualization Overlays, or NVOs). In practice, there will almost always be some resources that are not virtualized and remain on the physical network (VLANs). So how can VMs on a logical network communicate with resources on the physical network?

One method of communication is to route between the two with the VMware NSX L3 Edge Services Router. However, if you simply want to bridge between the logical and physical networks, such as from VXLAN to VLAN, the VMware NSX L2 Gateway can be used; that specific use case is discussed in more detail in this blog.

First, it is important to understand that there are currently two versions of NSX: NSX-vSphere (NSX 6.x) for ESXi-only environments and NSX-MH (NSX 4.x) for multi-hypervisor environments (ESXi, KVM, Xen). For NSX-vSphere, only a software version of the NSX L2 Gateway is officially supported; this functionality is provided within the kernel via a kernel-level module, similar to the kernel-level modules for VXLAN, Distributed Logical Routing (DLR), and Distributed Firewall. All of these kernel-level modules are installed into the VDS via the NSX Manager virtual appliance. It is important to note that NSX-vSphere does not use Open vSwitch (OVS) or OpenFlow; it utilizes kernel-level modules installed directly into the VDS and a message bus system known as RabbitMQ.

For NSX-MH, both software and hardware NSX L2 Gateways are supported. A main difference from NSX-vSphere is that OVS is utilized, which leverages OpenFlow for communication. The Open vSwitch Database Management Protocol (OVSDB) is an open standard draft that provides visibility into and control over switch packet forwarding. Hypervisors leverage the OVSDB protocol to set up a management communication channel with the control cluster.
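
To make that management channel a bit more concrete, below is a minimal sketch (not part of the NSX-vSphere workflow in this blog) of how an OVS instance on a non-ESXi hypervisor is typically pointed at an OVSDB manager using the standard ovs-vsctl utility. The controller address and port here are placeholders of my own, not values from an actual NSX-MH deployment; consult the NSX-MH documentation for the real registration procedure.

```python
# Minimal sketch: point a hypervisor's Open vSwitch at an OVSDB manager so the
# control cluster can read/write switch state. The controller IP and port are
# placeholders/assumptions, not values from a real NSX-MH deployment.
import subprocess

NSX_CONTROLLER_IP = "192.168.100.10"   # hypothetical controller address
OVSDB_PORT = 6632                      # assumed management port; verify per deployment

# Register the OVSDB manager on this hypervisor's OVS instance.
subprocess.run(
    ["ovs-vsctl", "set-manager", f"ssl:{NSX_CONTROLLER_IP}:{OVSDB_PORT}"],
    check=True,
)

# Show the current OVS configuration, including the configured manager.
print(subprocess.run(["ovs-vsctl", "show"], capture_output=True, text=True).stdout)
```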

An NSX Hardware L2 Gateway, such as the Dell S6000 switch, is a hardware switch with VXLAN encapsulation/de-encapsulation functionality in the ASIC, which allows for line-rate bridging between VXLAN and VLAN. The S6000 and other NSX Hardware L2 Gateways also implement OVSDB support, allowing for integration with NSX and the same visibility and control that OVSDB provides on the hypervisors.

Now that we have that basic understanding out of the way, I'll explain in detail how to implement the NSX L2 Gateway with NSX-vSphere (6.x). Below is the logical network of my lab setup, which I have also shown in prior blogs.

Logical design of Dell-VMware NSX setup

As you can see, I have one non-virtualized server on VLAN 31 connected to the physical network. It is actually connected to a pair of ToR Dell S4810 1/10 GbE switches; I used a cloud here to abstract away the physical network. The important thing to note is that the server sits on the physical network in VLAN 31. It is running Windows Server 2008 R2 Enterprise, but that fact is largely irrelevant for what I’m demonstrating; the server could be running any OS. The physical server has an IP address of ‘10.7.1.3/24’.

Note the virtual machine (VM) sitting on my ‘Bridged-App Tier’ logical switch with VXLAN Network Identifier (VNI) 5004. The VM has an IP address of ‘10.7.1.4/24’, so it is in the same subnet as the non-virtualized physical server on VLAN 31. The VMware NSX L2 Gateway can be utilized here to bridge between the logical network (VNI 5004) and the physical network (VLAN 31).
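
Since both endpoints sit in 10.7.1.0/24, no routing is required; this is exactly the scenario L2 bridging addresses. A quick sanity check with Python's ipaddress module, using the lab addresses above:

```python
# Sanity check: the VM on VNI 5004 and the physical server on VLAN 31 share the
# same IP subnet, so L2 bridging (not L3 routing) is what's needed between them.
from ipaddress import ip_interface

vm_on_vni_5004 = ip_interface("10.7.1.4/24")            # VM on 'Bridged-App Tier' logical switch
physical_server_vlan_31 = ip_interface("10.7.1.3/24")   # non-virtualized Windows server

print(vm_on_vni_5004.network)                    # 10.7.1.0/24
print(physical_server_vlan_31.network)           # 10.7.1.0/24
print(vm_on_vni_5004.network == physical_server_vlan_31.network)  # True -> same subnet, one broadcast domain once bridged
```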

The NSX L2 Gateway is deployed via the DLR Control VM sitting on the edge server within the Edge Cluster. Below is a snapshot of the deployed NSX Edge appliances as shown in the vSphere Web Client after the vCenter NSX plugin has been installed via NSX Manager. Note that, to keep the image uncluttered and readable, not all fields of the NSX Edges are shown.

Deployed VMware NSX Edge Appliances
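
As a side note, the same information can be pulled from the NSX Manager REST API, which is handy if you want to script against the DLR later. The following is only a rough sketch: the NSX Manager address and credentials are placeholders, and while the /api/4.0/edges call is documented for NSX-vSphere 6.x, the response parsing below is an assumption you should verify against the NSX API guide for your version.

```python
# Hedged sketch: list deployed NSX Edges via the NSX Manager REST API to find
# the edge ID of the DLR Control VM. Host/credentials are placeholders; verify
# the response schema for your NSX version before relying on this.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical NSX Manager address
AUTH = ("admin", "changeme")                    # placeholder credentials

resp = requests.get(f"{NSX_MANAGER}/api/4.0/edges", auth=AUTH, verify=False)
resp.raise_for_status()

# Walk the returned XML and print each edge's ID, name, and type.
root = ET.fromstring(resp.text)
for edge in root.iter("edgeSummary"):
    print(edge.findtext("objectId"), edge.findtext("name"), edge.findtext("edgeType"))
```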

By double-clicking the second entry, the DLR Control VM, and then clicking the ‘Manage’ tab and the ‘Bridging’ button, you will see the area where NSX L2 bridging is configured, as shown below.

Bridging Tab of DLR Control VM

From here, all that needs to be done is to click the ‘+’ sign, name the NSX L2 Bridge, select the logical switch configured with the respective VNI, select the distributed virtual port group configured with the respective VLAN, and click the ‘Ok’ button.

Configuring NSX L2 Bridge

Finally, as usual, you will be presented with a dialog to either commit or revert the change; click the ‘Publish’ button to commit.

You must click the 'Publish' button to commit the change
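
For those who prefer automation, the same bridge can in principle be configured through the NSX Manager REST API rather than the Web Client. Treat the sketch below as a rough outline only: the bridging endpoint and the XML element names are my assumptions, the object IDs are placeholders, and the edge ID is the DLR Control VM's ID (for example, as found with the edge-listing sketch earlier). Verify everything against the NSX for vSphere API guide before use.

```python
# Hedged sketch: configure an NSX L2 bridge on the DLR via the REST API instead
# of the Web Client. Endpoint and XML element names are assumptions; verify
# against the NSX for vSphere API guide for your version.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical NSX Manager address
AUTH = ("admin", "changeme")                    # placeholder credentials
DLR_EDGE_ID = "edge-2"                          # placeholder: the DLR Control VM's edge ID

# One bridge entry: the logical switch (virtual wire) backing VNI 5004 and the
# dvPortgroup backing VLAN 31. The object IDs below are placeholders.
bridge_config = """<bridges>
  <bridge>
    <name>Bridge-App-Tier</name>
    <virtualWire>virtualwire-4</virtualWire>
    <dvportGroup>dvportgroup-31</dvportGroup>
  </bridge>
</bridges>"""

resp = requests.put(
    f"{NSX_MANAGER}/api/4.0/edges/{DLR_EDGE_ID}/bridging/config",
    data=bridge_config,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,
)
print(resp.status_code)  # a 2xx status indicates the config was accepted; verify behavior for your version
```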

You can now console into the VM on the ‘Bridged-App Tier’ logical switch with VNI 5004 (IP: 10.7.1.4/24) and confirm via the ‘ping’ command that you can reach the physical Windows Server 2008 R2 host on VLAN 31 (IP: 10.7.1.3).

Communication between the logical (VNI 5004) and physical (VLAN 31) network via NSX L2 Bridge
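
If you would rather script this check than eyeball a console window, a trivial verification from the VM (or any other host on the bridged segment) could look like the following; the target is the lab's physical server on VLAN 31.

```python
# Trivial connectivity check from the VM on VNI 5004: ping the physical server
# on VLAN 31 across the NSX L2 bridge, using the OS 'ping' utility.
import platform
import subprocess

PHYSICAL_SERVER = "10.7.1.3"  # physical Windows server on VLAN 31 (from the lab above)

# '-n' is the packet-count flag on Windows, '-c' on Linux guests.
count_flag = "-n" if platform.system() == "Windows" else "-c"
result = subprocess.run(["ping", count_flag, "4", PHYSICAL_SERVER])

print("Bridge OK" if result.returncode == 0 else "No reply across the bridge")
```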

If you haven’t already, please check out the Dell-VMware NSX Reference Architecture (RA) whitepaper.

Follow me on Twitter: @Humair_Ahmed
