Posts Tagged ‘Cisco Nexus’

Dell MXL/IOA with FC Flex IOM: Network and Infrastructure Convergence with NPIV Proxy Gateway

Tuesday, January 21st, 2014

Dell FC Flex IO Module for Dell MXL/IOA Blade Switch

Very exciting news from Dell today as they announced the availability of the Dell FC Flex IOM module, which transforms the Dell M1000e MXL/IOA blade switch into an NPIV Proxy Gateway (NPG). This allows the MXL/IOA to bridge between Ethernet and Fibre Channel, enabling network convergence with FCoE. (more…)
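As a rough illustration of what NPG mode involves on the MXL/IOA, the sketch below shows the general FTOS FCoE-map pattern. The map name, VLAN, FC-map value, and interface numbers are all hypothetical placeholders, and the exact command set depends on the FTOS release and the FC Flex IOM documentation; treat this as an outline, not a verified configuration.

```
! Illustrative sketch only -- names, VLAN, and FC-map are placeholders
Dell(conf)# feature fc
Dell(conf)# fcoe-map SAN_FABRIC_A
Dell(conf-fcoe-SAN_FABRIC_A)# fabric-id 1002 vlan 1002
Dell(conf-fcoe-SAN_FABRIC_A)# fc-map 0efc02
Dell(conf-fcoe-SAN_FABRIC_A)# exit
! Apply the map to the FC uplinks and the server-facing ports
Dell(conf)# interface fibrechannel 0/41
Dell(conf-if-fc-0/41)# fabric SAN_FABRIC_A
Dell(conf-if-fc-0/41)# no shutdown
```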


Cisco vPC with Dell S4810 at ToR

Saturday, September 21st, 2013

Cisco’s vPC technology is similar to Dell’s VLT; it enables an access/leaf switch or server to have a single LAG connecting up to two separate switches. This allows for a non-blocking, multipathing scenario. You can read more about Dell’s VLT technology and its advantages in my prior blog, Dell Force10 – Layer 2 Multipathing via Virtual Link Trunking (VLT). In this blog, I will configure Cisco vPC between two Cisco Nexus 5548UP switches [NX-OS 5.1(3)N2(1)] down to a third ToR Cisco Nexus 5548UP switch [NX-OS 5.1(3)N2(1)]. I will then replace the third Cisco Nexus 5548UP switch at ToR with a Dell S4810 switch [FTOS 9.0]. (more…)
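For reference, the core vPC building blocks on each Nexus 5548UP peer look like the sketch below. The domain ID, keepalive addresses, and port-channel numbers are made-up examples; the full post covers the complete configuration.

```
! On each vPC peer (keepalive source/destination swap on the second peer)
feature vpc
feature lacp

vpc domain 1
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! The vPC peer link between the two Nexus peers
interface port-channel 1
  switchport mode trunk
  vpc peer-link

! The downstream vPC toward the ToR switch (same vPC number on both peers)
interface port-channel 10
  switchport mode trunk
  vpc 10
```

The ToR switch (Nexus or S4810) simply sees port-channel 10 as one ordinary LACP LAG, which is what makes the dual-homed, non-blocking topology possible.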


Understanding Cisco Nexus 2000 Series Fabric Extenders

Tuesday, August 13th, 2013

Cisco Nexus 2248PQ Fabric Extender

Cisco Nexus 2000 Series Fabric Extenders (FEX) behave as remote line cards for Cisco Nexus switches. The fabric extender acts as an extension of the parent Cisco Nexus switch fabric; the fabric extender and the parent Cisco Nexus switch together form a distributed modular system. With this architecture, the fabric extender typically sits at top-of-rack (ToR) with the parent Cisco Nexus switch at end-of-row (EoR). (more…)
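The "remote line card" model shows up directly in the parent switch configuration: the FEX is defined on the parent, associated with fabric uplink ports, and its host ports then appear as local interfaces. The FEX number and interface below are arbitrary examples.

```
! On the parent Nexus switch
feature fex

fex 100
  description rack-1-tor
  pinning max-links 1

! Fabric uplink from parent to the FEX
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100

! Once online, FEX host ports are configured on the parent, e.g.:
interface ethernet 100/1/1
  switchport access vlan 10
```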


Cisco Nexus 5548UP – Configuring the Management Interface

Wednesday, July 3rd, 2013

This is a relatively simple blog on configuring the Cisco Nexus 5548UP management interface. When Cisco moved from IOS to NX-OS as the operating system on their new Nexus datacenter switches, some of the commands and syntax changed, even for the simplest tasks. For easy reference on a common task such as setting up the management interface, I decided to write this quick blog. The network diagram is below. I am using a Dell S4810 [FTOS 9.1] as my management switch, to which both the Dell PowerEdge R710 server and Cisco Nexus 5548UP [NX-OS 5.2(1)N1(1b)] connect. I also use a 1000Base-T SFP transceiver to convert the Dell S4810 fiber port to copper so I can connect it to the management port of the Cisco Nexus. (more…)
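The NX-OS side of this boils down to addressing mgmt0 and giving the dedicated management VRF a default route. The addresses below are placeholder examples; the full post walks through the actual values.

```
! On the Nexus 5548UP -- mgmt0 lives in the management VRF
interface mgmt 0
  ip address 172.16.1.10/24

! Default route for out-of-band management traffic
vrf context management
  ip route 0.0.0.0/0 172.16.1.1
```

Note that the management default route goes under `vrf context management`, not the global routing table; this is one of the syntax differences from classic IOS that trips people up.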


Buying a Converged/FCoE Switch?

Saturday, May 11th, 2013

I felt the urge to write this article after receiving several inquiries about buying used Cisco Nexus 5010 and Nexus 5020 switches, because they are available on some third-party sites at large discounts – at least compared to the new Cisco Nexus switches. First, let me mention that both of these switches are EOL. Second, as I always say, whether the switch will work for you or not will depend on your network requirements. Let me mention a few things to take into consideration when looking at some of these old EOL switches, when the thought of saving some $$$$ overwhelms you. (more…)


Connecting Dell PowerEdge M I/O Aggregator with Dell Force10 S4810 and Cisco Nexus 5548UP

Monday, February 11th, 2013

Dell PowerEdge M I/O Aggregator

The Dell PowerEdge M I/O Aggregator is a slick blade switch that plugs into the Dell PowerEdge M1000e chassis and requires barely any configuration or networking knowledge. Think of it as an advanced layer 2 switch that provides expandable uplink connectivity. The base blade comes with 2 x 40 GbE ports that by default are configured as 8 x 10 GbE ports. If desired, these ports can also be used as 40 GbE stacking ports. The Dell PowerEdge M I/O Aggregator also provides 32 internal 10 GbE connections for Dell blade servers installed in the Dell PowerEdge M1000e chassis. If quarter-height blade servers are used, the M1000e can support up to 32 servers. This is pretty cool, not only in terms of functionality, but also in terms of the consolidation and the mess of Ethernet/power cables avoided by not using standalone components. (more…)
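On switching the base 40 GbE ports from their default 8 x 10 GbE breakout role to 40 GbE stacking: in FTOS 9.x this is driven by the IOA's operating mode, roughly as sketched below. The exact command and mode names should be checked against the FTOS release notes for your version; this is an assumed outline, and a reload is required for the mode change to take effect.

```
! Assumed sketch -- verify against your FTOS 9.x documentation
Dell(conf)# stack-unit 0 iom-mode stack
Dell(conf)# exit
Dell# write memory
Dell# reload
```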
