Creating a LAG between an ESXi vSwitch and a Physical Switch

In this lab I am going to create a LAG (Link Aggregation Group) between an ESXi vSwitch and a physical switch. You can use Cisco, Dell Force10, Juniper, or any other manufacturer for the physical switch. Depending on the switch you use, the commands for creating a LAG (referred to as a port-channel by Dell Force10 and an EtherChannel by Cisco) will vary. I will not get into the details of creating a LAG; please reference my earlier posts Creating a Link Aggregation Group (LAG) in FTOS and Setting up Cisco EtherChannels – Static, PAgP, and LACP for this information.
1. I already have a static LAG containing two 1 gig ports set up on the physical switch, and this LAG is part of VLAN 100. Two other 1 gig ports are also part of this VLAN. One of these ports is connected to a laptop with the IP of 192.168.1.221. VLAN 100 has an IP of 192.168.1.220.
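
For reference, a static two-port LAG in VLAN 100 looks roughly like the sketch below on a Cisco IOS switch; the port numbers are hypothetical and the FTOS equivalent differs (see the posts linked above for the full walkthroughs).

    ! Hypothetical port numbers - adjust for your hardware
    ! "channel-group 1 mode on" creates a static LAG (no LACP/PAgP negotiation)
    interface range GigabitEthernet0/1 - 2
     switchport mode access
     switchport access vlan 100
     channel-group 1 mode on
    !
    ! SVI that gives VLAN 100 its 192.168.1.220 address
    interface Vlan100
     ip address 192.168.1.220 255.255.255.0
     no shutdown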

2. I also have two Ethernet cables running from the NIC 2 and NIC 3 ports on the physical ESXi 4.1 server to the switch. The cables are connected to the two ports that are part of the LAG. You can see below that vmnic2 and vmnic3 are both part of vSwitch1 on my ESXi server. See my earlier post pNIC, vNIC, and vmNIC Confusion if you need a brush-up on some virtual terminology.

vSwitch1 on my ESXi 4.1 Server
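
If you prefer the command line over the vSphere Client, you can check (or add) the uplinks from the ESXi 4.1 console or Tech Support Mode with esxcfg-vswitch. This is a minimal sketch assuming the vSwitch and vmnic names from my lab; yours may differ.

    # List all vSwitches with their port groups and uplinks
    esxcfg-vswitch -l

    # Link vmnic2 and vmnic3 to vSwitch1 if they are not already uplinks
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
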
3. From the above snapshot, you can see I also have two hosts connected to vSwitch1. I will be using the Kubuntu 10.10 Server virtual machine (VM) to ping across the LAG to the VLAN interface (192.168.1.220) and the physical laptop (192.168.1.221). However, before I do that, I still need to set up the LAG correctly between the vSwitch and the physical switch.

4. What I need to do at this point is set up NIC teaming on the vSwitch. NIC teaming is the process of applying policies to a vSwitch or port group to either load-balance traffic based on a specified algorithm or provide failover in case of hardware failure. I will be setting up load balancing.

Below I edit the properties of the VM Network port group. Under the “Load Balancing” drop-down box, you can see that there are four options available.

Load Balancing options

Route Based on the Originating Virtual Port ID – this is the default setting and a virtual port ID is assigned to anything that plugs into the vSwitch. Then based on this virtual port ID, the VMkernel assigns a pNIC as an uplink to the guest on the vSwitch. Whenever the guest tries to communicate through the vSwitch out to the physical LAN, the VMkernel will always attempt to pass the traffic through the assigned pNIC (as long as this pNIC is up).

Route Based on IP Hash – with this option a vSwitch can use multiple uplinks at the same time to communicate out to the physical LAN. However, for this option to work, 802.3ad link aggregation must be configured on the physical switch (see the worked example after these options).

Route Based on MAC Hash – operates similarly to the virtual port ID policy. vSwitch guests are assigned a single uplink to use; however, this time the VMkernel uses the MAC address to assign and distribute the available uplinks. If that link were to fail, the remaining NIC(s) would handle the traffic.

Use Explicit Failover Order – with this option only one NIC is active at any given time; the other NIC is on standby in case a hardware failure occurs on the active NIC.
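
Putting a rough number on the IP hash policy: as I understand it, the VMkernel picks an uplink by XORing the least significant byte of the source and destination IP addresses and taking the result modulo the number of active uplinks, so a given source/destination pair always lands on the same pNIC. A quick worked example with the addresses in this lab, assuming a hypothetical VM IP of 192.168.1.10 and my two uplinks:

    # Hypothetical VM 192.168.1.10 talking to the VLAN interface 192.168.1.220
    # LSB of source      = 10  (0x0A)
    # LSB of destination = 220 (0xDC)
    # 0x0A XOR 0xDC      = 0xD6 = 214
    # 214 mod 2 uplinks  = 0  -> the first uplink (e.g. vmnic2) carries this flow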

5. From the above options I use “Route Based on IP Hash”. As you can see from the snapshots below, from the Kubuntu virtual machine I am able to ping both the VLAN IP on the physical switch and the physical laptop’s IP!

Selecting NIC Teaming options

Successful ping from VM to physical LAN

Successful ping from VM to physical LAN
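
For reference, the same check from a shell on the Kubuntu VM is just a pair of pings across the LAG to the physical side:

    ping -c 4 192.168.1.220   # VLAN 100 interface on the physical switch
    ping -c 4 192.168.1.221   # laptop attached to the same VLAN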


7 Responses to Creating a LAG between an ESXi vSwitch and a Physical Switch

  1. Hussain Al Sayed says:

    Hi Humair,

    I have two S5000s connected in a LAG and all my ESXi hosts are connected in redundant mode. Each vSwitch has two uplink NICs, and each physical NIC is connected to a different S5000 switch. I don’t have LAG/LACP for the ESXi ports. Do I have to create one LAG and add the physical interfaces of all hosts to this LAG, or does each network have to be in its own LAG across all ESXi hosts? What I mean is: should the management network of all hosts be one LAG, vMotion one LAG, and the VM networks one LAG?

    I’m not a networking guy, but I would like to learn how to configure it. The only LACP I have on the S5000 switches is on the uplinks from the A4810 cores.

    Appreciate your reply.

  2. Humair says:

    Hi Hussain,

    Assuming you have the vSphere Enterprise Plus license and are using the VDS vSwitch instead of the standard vSwitch, you have multiple options. ESXi with VDS does support LACP and allows for an active-active L2 multipathing technology like Dell VLT or Cisco vPC to be used on the connecting access/ToR switches.

    However, VMware also has its own NIC teaming options where you can get active/active from the server perspective without doing any special LAG/LACP configuration on the access/ToR switches. I prefer to use the VMware NIC teaming options when possible, as they don’t require any special switch configuration and they simplify the network. This post is quite old, and VMware has since come out with additional NIC teaming options beyond those mentioned in this post.

    I should also mention that VMware allows for different NIC teaming options per distributed port group. This is pretty cool in that it allows you to use the same uplinks for all port groups and still apply different NIC teaming, if desired, per distributed port group. Typically, different types of traffic (management, vMotion, storage, etc.) will be assigned to different distributed port groups and respective VLANs, and the VLANs will be trunked out the server uplinks to the access/ToR switches.

  3. Hari says:

    Hi Humair,

    I was reading your reply above, which actually helps me in a certain way. I’m ending up confused about which teaming mode I should use on the NSX ESXi side. Please note I’m not from the VMware side, so I’m not sure which teaming policy we would prefer.

    1) IP Hash
    2) LACP

    Before, we were using “Route based on Virtual Port ID” on the VDS, and during that time VMs sitting on the same logical switch (same VLAN) but on different clusters couldn’t ping each other. As soon as we changed the teaming policy to IP hash it started working. But then VMware suggested an option with LACP. However, I have Dell MXL (Force10) switches which interconnect the ESX hosts, and Arista switches which act as the ToR. The MXL switch blade ports are configured as normal ports with no LAG. The MXL uplink ports towards the Arista are configured as a static port-channel.

    So if we are going ahead with LACP on the VMware NSX side, I need my MXL blade ports to be in a LAG running LACP. I understand it needs some config changes, but I wanted to know what the best way to do this is.

    Your response would be greatly appreciated.

    Thanks
    Hari

  4. Mark James says:

    Happy New Year! May this year be a good year and not 2020 two (or 2020 too)!

  5. דוד ירמיהו says:

    Thank you very much!

  6. James Dickerson says:

    Happy New Year! Luckily this year doesn’t have a 2 at the end so people like Mark James can’t make jokes about the cursed year of 2020


  7. Thomas Strand says:

    Thank you for making a guide on this. VMware makes it kind of confusing when it comes to link aggregation. Guide and screenshots are very clear.

