Cisco Nexus 5000 and I/O Consolidation

Amr Ibrahim is a Global Knowledge instructor who teaches and blogs from Global Knowledge Egypt.

As I mentioned in a previous post, Cisco has a complete portfolio for the data center to support the ongoing journey towards cloud computing through pervasive virtualization. One of those products, which we will discuss here, is the Nexus 5000. It is the first Cisco switch to support a new technology called I/O consolidation. But before we discuss what I/O consolidation is, let us examine what has changed in the data center according to Cisco's Data Center 3.0 vision.

Before virtualization arrived in the data center, the relationship between the OS and the physical hardware was one-to-one: we installed the OS along with all of its applications on one physical machine, and the OS had full access to the machine's resources (CPU, memory, network, and storage).

Some applications require access to external storage: they use not only the internal disk but also an external disk (technically speaking, a LUN, which stands for Logical Unit Number) that is reached over a SAN (Storage Area Network). To give applications that access, the server needs another type of adapter called an HBA (Host Bus Adapter). Now your machine has two adapters: the NIC for network access and the HBA for storage access to reach those external disks or LUNs. But let me ask you a question: what if the NIC fails? You lose access to the network, and if the HBA fails, you lose access to the external storage. That is why we cannot install just one NIC or one HBA. We need at least two NICs and two HBAs, so that if one NIC or HBA fails, we can still use the other adapter.
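To picture that redundancy rule, here is a toy Python sketch (the adapter names and health states are made up): traffic simply moves to the first healthy adapter of each type, which is exactly why a single NIC or a single HBA would be a single point of failure.

```python
from dataclasses import dataclass

@dataclass
class Adapter:
    name: str
    kind: str       # "NIC" for network access, "HBA" for storage access
    healthy: bool

def active_adapter(adapters, kind):
    """Return the first healthy adapter of the requested kind, or None."""
    return next((a for a in adapters if a.kind == kind and a.healthy), None)

# Hypothetical server with the minimum redundant setup: 2 NICs + 2 HBAs.
server = [
    Adapter("nic0", "NIC", healthy=False),  # this NIC has failed
    Adapter("nic1", "NIC", healthy=True),   # the redundant NIC takes over
    Adapter("hba0", "HBA", healthy=True),
    Adapter("hba1", "HBA", healthy=True),
]

print(active_adapter(server, "NIC").name)   # nic1: network access survives
print(active_adapter(server, "HBA").name)   # hba0: storage access unaffected
```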

So for those servers you will end up with at least four adapters: two NICs and two HBAs. Each NIC needs a copper cable, while each HBA requires a fiber cable. At the access layer you should also have four access switches: two network switches and two SAN switches, to terminate the connections from all four adapters while providing redundancy.
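To make that footprint concrete, here is a minimal back-of-the-envelope calculation. The 40-server rack is a hypothetical example; the per-server counts come straight from the two-NIC, two-HBA design above.

```python
# Back-of-the-envelope footprint for the traditional design described above.
# The 40-server rack is a made-up example; the per-server counts are not.
servers = 40
nics_per_server = 2   # each NIC needs a copper cable to a network switch
hbas_per_server = 2   # each HBA needs a fiber cable to a SAN switch

adapters = servers * (nics_per_server + hbas_per_server)
copper_cables = servers * nics_per_server
fiber_cables = servers * hbas_per_server
access_switches = 4   # 2 network + 2 SAN, for redundancy

print(f"{adapters} adapters, {copper_cables} copper + {fiber_cables} fiber cables,"
      f" {access_switches} access switches")
# -> 160 adapters, 80 copper + 80 fiber cables, 4 access switches
```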

That is the case for applications that require access to storage; for applications that require network access only, NICs and network switches are enough. After adding virtualization, however, each machine running a hypervisor must have access to external storage, which means all your servers need both HBAs and NICs, and the access layer needs both network switches and SAN switches. The problem is the management overhead this creates for each server: you have to make sure every adapter has the latest firmware and is properly configured for redundancy or load balancing. It not only creates management overhead for the network and SAN switches but also increases the cabling, as each server now requires at least two copper cables for network access and two fiber cables for SAN access.

As you can see, virtualization mandates two separate physical infrastructures: the network infrastructure, meaning the NICs installed inside the servers along with the copper cables that connect them to the network switches, and the SAN infrastructure, meaning the HBAs along with the fiber cables that connect them to the SAN switches and provide access to the external disks or LUNs.

The network infrastructure carries the Ethernet traffic, and the SAN infrastructure carries the FC (Fibre Channel) traffic, the protocol used in SANs. Now the question is: how can we minimize the number of physical devices used to provide servers with both network and SAN access? Is this possible? Yes, it is, with I/O consolidation. I/O consolidation allows us to use only one physical infrastructure to carry both the FC traffic and the Ethernet traffic.

I/O consolidation means carrying the two different types of traffic over a single physical infrastructure. Does this mean we can use the network infrastructure to carry both the Ethernet traffic and the FC traffic at the same time? Yes, that is exactly what we are talking about. But wait a minute: I understand how we can send Ethernet traffic over network switches, but how can we send FC traffic over network infrastructure? The answer is encapsulation. We encapsulate FC frames inside Ethernet headers, so from the outside each one appears to be a normal Ethernet frame. This is called FCoE (Fibre Channel over Ethernet).
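To illustrate the encapsulation, here is a simplified Python sketch. The EtherType 0x8906 is the value registered for FCoE; the MAC addresses and the dummy FC frame are made up, and a real FCoE frame also carries a version field, SOF/EOF delimiters, padding, and a checksum, which are omitted here.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # the EtherType assigned to FCoE

def encapsulate_fc_in_ethernet(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a raw FC frame in an Ethernet header (simplified sketch)."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

# Made-up example: a placeholder FC frame carried inside an Ethernet frame.
fc_frame = b"\x00" * 36                      # dummy FC frame bytes
frame = encapsulate_fc_in_ethernet(
    fc_frame,
    src_mac=bytes.fromhex("0e0000000001"),   # illustrative MAC addresses
    dst_mac=bytes.fromhex("0e0000000002"),
)
print(f"Ethernet frame of {len(frame)} bytes, EtherType 0x8906 (FCoE)")
```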

FCoE is supported by the Nexus 5000 family, which means that one Nexus 5000 can replace two switches at the access layer, one network switch and one SAN switch, decreasing the number of access layer devices by fifty percent. But does this change at the access layer have anything to do with the servers? Yes, of course: instead of having two NICs and two HBAs on the server side, we can replace all four adapters with a single type of adapter called the CNA (Converged Network Adapter), which the server can use to send either Ethernet traffic or FC traffic.
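Continuing the hypothetical 40-server rack from the earlier sketch, and assuming a redundant pair of CNAs per server (mirroring the two-NIC, two-HBA redundancy above), the consolidated footprint works out as follows:

```python
# The same hypothetical 40-server rack after I/O consolidation: a redundant
# pair of CNAs per server carries both Ethernet and FCoE traffic, and one
# pair of Nexus 5000 switches replaces the 2 network + 2 SAN switches.
servers = 40
cnas_per_server = 2   # redundant pair, mirroring the 2-NIC/2-HBA design

adapters = servers * cnas_per_server   # 80 adapters, down from 160
cables = servers * cnas_per_server     # 80 cables, down from 160
access_switches = 2                    # down from 4: the fifty percent cut

print(f"{adapters} adapters, {cables} cables, {access_switches} access switches")
```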

So, as you can see, the Nexus 5000 lets you use virtualization while reducing your physical footprint, which in turn decreases both CAPEX and OPEX costs.

