Dell Networking S5000: Deployment of a Converged Infrastructure with FCoE

Check out my latest Dell Networking document and learn how to deploy FCoE using the Dell S5000 converged switch: “Dell Networking S5000: Deployment of a Converged Infrastructure with FCoE”.

Dell S5000 converged switch
In the Dell Networking S5000: The Building Blocks of Unified Fabric and LAN/SAN Convergence whitepaper I demonstrated and explained the movement from a traditional non-converged LAN/SAN network to a converged LAN/SAN infrastructure and how the Dell S5000 switch is an ideal solution for this transition. In addition, I covered the many benefits of moving to a converged infrastructure such as less maintenance and considerable cost savings. The Dell S5000 converged switch with its unique modular design allows end users to migrate to a converged solution and increase port count at their own pace without replacing the entire switch. This benefit is unmatched in the industry. In this deployment guide, I cover detailed Dell S5000 topology and configuration examples.



6 Responses to “Dell Networking S5000: Deployment of a Converged Infrastructure with FCoE”

  1. Hussain Al Sayed says:

    I have Dell Compellent storage attached to S5000 switches via iSCSI, and I’m getting my new R730 with an HBA to be connected via Fibre Channel. Do you recommend FCoE over Fibre Channel? I have the flexibility to go down both routes, but which one is best and easiest to configure and maintain with the least complexity: FCoE or FC attached?

  2. Humair says:

    Hi Hussain,

    A few things to consider:

    - Dell Compellent storage does not currently support DCB for iSCSI, so you will not be able to run iSCSI and FCoE together over a lossless, DCB-enabled fabric. FCoE of course requires DCB, whereas iSCSI does not. If you were not planning to run a converged network with both iSCSI and FCoE, then this is not an issue. Dell EqualLogic arrays do support DCB for iSCSI; I recently demonstrated this in a Dell-VMware reference architecture white paper.

    - FCoE and FC are both mature technologies. FCoE is beneficial in that it can save a lot of hardware cost (fewer CNAs, fewer switches, fewer cables, etc.). I would say FC is probably easier to implement, as you basically just plug the FC switches into the HBAs and configure any zoning. FCoE requires a bit more configuration on the Ethernet side (FCoE map, DCB, etc.) but has benefits in cost savings and in managing one network instead of two.

    - S5000 does not support FCoE storage targets. This means you have three options:

    1.) Use FCoE from server to S5000 and directly connect Compellent Storage FC Ports to S5000 FC ports. This requires the S5000 to run in Full Fabric Mode. S5000 provides the Full Fabric services and all zoning is done on S5000.

    2.) Use FCoE from server to S5000 and connect S5000 to another FC switch that supports NPIV (Most Brocade or Cisco FC switches). The FC switch provides the Full Fabric Services and all zoning is done on the FC switch. Connect the FC switch directly to the Compellent Storage FC Ports. S5000 must run in NPIV Proxy Gateway mode (NPG Mode).

    3.) Connect servers directly to the S5000 via FC HBAs. However, currently only one universal module (12 FC ports) is supported per S5000 switch, although it is possible to connect S5000s together via ISL links to gain more FC ports.

    I should mention you can connect both FC and FCoE hosts to the S5000 when it is in Full Fabric mode.

    Depending on your requirements, I would probably recommend option 1 or 2, as they are more practical. If you already have an existing backend SAN, I would recommend either keeping it as is or, if you want the benefits of convergence, leveraging option 2.
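    For reference, the S5000 side of option 2 is configured as an NPIV Proxy Gateway. The sketch below is based on FTOS 9.x syntax from the S5000 deployment guides; the map name, fabric/VLAN ID, fc-map value, and interface number are example placeholders, and exact command syntax can vary by firmware release, so verify it against the deployment guide for your version.

    ```
    ! Hedged sketch: S5000 in NPIV Proxy Gateway (NPG) mode, FTOS 9.x.
    ! SAN_FABRIC_A, VLAN/fabric-id 1002, fc-map 0efc02, and Te 0/0 are example values.
    Dell#configure
    Dell(conf)#feature fc npg                  ! run the switch as an NPIV proxy gateway
    Dell(conf)#dcb enable                      ! DCB (PFC/ETS) is required for lossless FCoE
    Dell(conf)#fcoe-map SAN_FABRIC_A
    Dell(conf-fcoe-SAN_FABRIC_A)#fabric-id 1002 vlan 1002
    Dell(conf-fcoe-SAN_FABRIC_A)#fc-map 0efc02
    Dell(conf-fcoe-SAN_FABRIC_A)#exit
    ! Apply the FCoE map to the server-facing CNA port; with NPG, all zoning
    ! is done on the upstream NPIV-capable FC switch, not on the S5000.
    Dell(conf)#interface tengigabitethernet 0/0
    Dell(conf-if-te-0/0)#portmode hybrid
    Dell(conf-if-te-0/0)#switchport
    Dell(conf-if-te-0/0)#fcoe-map SAN_FABRIC_A
    Dell(conf-if-te-0/0)#no shutdown
    ```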

  3. Amad Seni says:

    Good day,

    we have the following:

    2 x R730 with 2 x Emulex OneConnect CNA HBAs
    2 x S5000 each with 1 x Unified Module and 1 x Ethernet Module
    EMC VNX 5200 with dual FCoE ports and dual 10G iSCSI Modules
    No dedicated FC Switches available.

    From what we’ve been reading, it seems we cannot connect the SAN to the S5000 and zone to the CNAs on the servers directly?

    Please confirm if anything else is required to get this scenario working without adding extra complexity.

  4. Humair says:

    Hi Amad,

    You can run the S5000 in Full Fabric mode, which allows you to connect the S5000 directly to storage if desired. All the zoning is then done on the S5000 switch, as the S5000 acts as the FCF. When the S5000 was first released, it only supported NPG mode, which does not allow a direct connection to storage but does allow interoperability with FC switches from different vendors. Full Fabric mode was released later and allows for a direct connection to storage.
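    A rough sketch of what Full Fabric mode with local zoning looks like on the S5000 follows. This is based on FTOS 9.x syntax; the domain ID, zone/zoneset names, and WWPNs are all example placeholders, and the exact commands (particularly zoneset activation under the FCoE map) should be double-checked against the deployment guide for your firmware.

    ```
    ! Hedged sketch: S5000 in Full Fabric (F_Port) mode with zoning on the switch.
    ! Domain ID, WWPNs, and zone/zoneset names are example placeholders.
    Dell(conf)#feature fc fport domain-id 2    ! S5000 acts as the FCF / fabric services provider
    Dell(conf)#dcb enable
    Dell(conf)#fc zone Server1_Zone
    Dell(conf-fc-zone-Server1_Zone)#member 10:00:aa:bb:cc:dd:ee:01   ! server CNA WWPN
    Dell(conf-fc-zone-Server1_Zone)#member 50:06:01:60:aa:bb:cc:01   ! storage FC port WWPN
    Dell(conf-fc-zone-Server1_Zone)#exit
    Dell(conf)#fc zoneset Fabric_A_Zoneset
    Dell(conf-fc-zoneset-Fabric_A_Zoneset)#member Server1_Zone
    Dell(conf-fc-zoneset-Fabric_A_Zoneset)#exit
    ! Activate the zoneset under the fabric associated with the FCoE map.
    Dell(conf)#fcoe-map SAN_FABRIC_A
    Dell(conf-fcoe-SAN_FABRIC_A)#fc-fabric
    Dell(conf-fcoe-SAN_FABRIC_A-fcfabric)#activate-zoneset Fabric_A_Zoneset
    ```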

  5. Daniel says:

    Good day,

    How many Unified Fabric Modules are currently supported per single S5000 unit?

    Still 1?

    Thank You for response in advance.


  6. Jean-baptiste says:

    Hi, I’m currently trying to build a stretched SC4020 non-uniform cluster. Each SC4020 connects to a pair of S5000s in VLT at each site, and uses FCoE to connect down to an M1000e chassis at each site as well. I’ve also got mVLT between each pair of S5000s at each site (thanks to all the docs you have published to get that working ;)).

    But I’m running into a problem now. I’ve definitely got DCB on my S5000s, but I need to enable iSCSI for the replication link between the SC4020s (for the time being, just connecting the 10Gb replication port of each controller). I’m not able to get that replication working: I can see and ping the iSCSI address of each port at both sites, but replication itself won’t work. I’m using the 12 x 10GBaseT module on the S5000 to connect the iSCSI replication ports (each S5000 has 3 modules: FC unified, 10Gb SFP+, and 10GBaseT).

    Could my problem be due to the fact that DCB is active on my S5000s, and that this prevents the iSCSI communication between the SC4020s from working correctly? Basically, the initiators/targets are only the SC4020s; the S5000s are just used for Layer 2 stretching between sites (there is 4 x 10Gbps Layer 2 fiber between sites). Thanks for your comments on this problem. There are very few docs on setting up the iSCSI replication port of the SC4020 when used in FC mode.

    Regards, Jean-Baptiste
