The Main Components of a Unified Fabric
Ten or so years ago, network administrators were approached with a new idea: a system based on the new "Voice over IP" (VoIP) protocol that would carry both data and voice on the same wire. Managers were excited about this technology because it would save them money on infrastructure (cabling), but PBX operators were not amused and did not take kindly to their 66-style punch blocks being rendered obsolete. Network administrators were left to learn and integrate the technology and everything that came along with it, such as Quality of Service and security.
We now have the same discussion with Storage Area Network (SAN) administrators. People running data centers are asked to trim the bottom line but still want ultimate design and infrastructure flexibility, in an era where servers are not purchased for specific applications but rather to increase resources in the virtual cloud. Cisco has released a product line called Nexus that makes data center managers and technical architects think twice about their equipment needs.
To quell any bandwidth issues, the Nexus switches offer 10 Gbps connectivity to the hosts, with some I/O modules capable of 40 and even 100 Gbps per port. This means a single physical optical cable can provide a server with both SAN/LAN and high-speed connectivity. That makes a lot of people happy, namely the server administrators and the data center managers: the first group because their needs are addressed, the second because the cost of provisioning servers on the network drops.
Merging the SAN, LAN, and InfiniBand capabilities into one wire and one switch is what defines the Unified Fabric. However, network administrators are often left with the task of understanding how this Unified Fabric is going to work. SAN administrators have to worry about logical unit numbers (LUNs), initiators, targets, masking, and zoning, as well as the well-being of their storage arrays. Network administrators will be responsible for taking the native Fibre Channel traffic out of the SAN area of the data center and transporting it to the hosts over the Unified Fabric.
Cisco switches such as the Nexus 5000 series offer several options, such as built-in FCoE/CE (Classical Ethernet) ports, as well as native Fibre Channel expansion modules, to communicate with and/or convert an existing Fibre Channel infrastructure. In addition, newer models such as the Cisco Nexus 5548UP and 5596UP offer "Unified Ports" that can turn any port into either a native Fibre Channel port or an FCoE/CE port, giving you the ultimate flexibility. To top it off, storage vendors now sell FCoE storage processors (SPs) that eliminate the need for Fibre Channel at the source, and with it the need for conversion.
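As a rough sketch of how a Unified Port is repurposed, on the 5500UP platform the port type is set per slot, native FC ports must be allocated from the end of the slot's port range, and the change only takes effect after a reload. Port numbers below are illustrative:

```
! Hedged sketch for a Nexus 5548UP; slot/port ranges vary by platform.
switch# configure terminal
switch(config)# slot 1
switch(config-slot)# port 31-32 type fc   ! last two unified ports become native FC
switch(config-slot)# exit
switch(config)# exit
switch# copy running-config startup-config
switch# reload                            ! port-type changes require a reload
```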
The Nexus switches were born of a fusion between the Catalyst and MDS lines and can handle everything an MDS could do in the past. As a network administrator, it is possible to use Role-Based Access Control (RBAC) to give "storage" permissions to the SAN administrators, so they can continue using tools like Fabric Manager with the Nexus without impacting LAN configurations.
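A custom role for the storage team might look like the sketch below. The role name, account, and feature names are illustrative; the set of role features differs by platform and NX-OS release, so check `show role feature` on your switch:

```
! Hedged sketch of a storage-only RBAC role on NX-OS.
switch(config)# role name san-admin
switch(config-role)# rule 1 permit read-write feature fcoe
switch(config-role)# rule 2 permit read-write feature zone
switch(config-role)# exit
! Assign the role to a (hypothetical) storage administrator account.
switch(config)# username storage1 password Str0ngP@ss role san-admin
```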
The challenges for the network administrators are numerous. Classical Ethernet is built on a "connect anywhere" and oversubscription model where losing an Ethernet frame is not a problem. On the Fibre Channel side, however, the approach is totally different. Frames put on the wire actually carry SCSI commands, and the SCSI protocol is built on the presumption that the transport does not lose data; consequently, there are no retransmission mechanisms built into the Fibre Channel Protocol.
FCoE does not change that behavior. In fact, FCoE changes nothing but the envelope of the FC frame, to make it readable by an Ethernet switch. In Unified Fabric, the segment that connects, for example, a Nexus switch to a server is called a Unified Wire, since it carries both CE and FCoE traffic.
QoS is very important in Unified Fabric since the FC traffic carries a "lossless" guarantee. There are several components at play here, but in summary:
- The Nexus switches tag the FC traffic with the highest priority, and
- Virtual Output Queues (VOQs) can be involved, allowing a switch to guarantee a path through its fabric for traffic, especially storage traffic.
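On Nexus 5000 switches the predefined class-fcoe class already implements the lossless treatment; the sketch below shows how that no-drop behavior is expressed in a network-qos policy, assuming NX-OS defaults (the policy name is illustrative, the class names are the platform's built-ins):

```
! Hedged sketch of the no-drop treatment for FCoE on a Nexus 5000.
policy-map type network-qos fcoe-nq
  class type network-qos class-fcoe
    pause no-drop          ! lossless: PFC PAUSE instead of dropping
    mtu 2158               ! baby jumbo frame fits a full 2112-byte FC payload
  class type network-qos class-default
    mtu 1500
system qos
  service-policy type network-qos fcoe-nq
```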
What network administrators also need to understand are the overall configurations necessary to accommodate this new unified method. In a Unified Fabric model, the network interface cards (NICs) connecting the servers to their switches are called Converged Network Adapters (CNAs), and they are able to send FCoE and CE frames on the same wire. The FCoE Initialization Protocol (FIP) discovers the switch port the CNA is connected to, which the network admin will have configured as a trunk carrying a data VLAN and a storage VLAN (mapped to a VSAN). Once FIP has discovered that information, the fabric login (FLOGI) and port login (PLOGI) process can continue on the storage side, and whatever else needs to happen on the CE side will continue as well.
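The switch-side configuration that FIP discovers can be sketched as follows; VLAN, VSAN, and interface numbers are illustrative:

```
! Hedged sketch of a unified wire toward a CNA on a Nexus 5000.
feature fcoe
vlan 100
  fcoe vsan 100                ! map the storage VLAN to VSAN 100
vsan database
  vsan 100
interface Ethernet1/10         ! trunk carrying data VLAN 1 + storage VLAN 100
  switchport mode trunk
  switchport trunk allowed vlan 1,100
interface vfc10                ! virtual FC interface for the FCoE side
  bind interface Ethernet1/10
  no shutdown
vsan database
  vsan 100 interface vfc10
```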
The Unified Fabric model also brings several Ethernet enhancements to deal with this amalgamation of traffic, including two crucial ones: Link Layer Discovery Protocol (LLDP) and Data Center Bridging Exchange (DCBX). In the Cisco world, when two switches want to discover each other, they can usually exchange information automatically using the Cisco Discovery Protocol (CDP). However, CDP is proprietary and cannot usually discover other vendors' switches. LLDP also operates at Layer 2 and is vendor-neutral, and it can now advertise certain capabilities related to the Unified Fabric. LLDP uses type, length, value (TLV) fields to advertise that the switch or server you are operating is able to speak the DCBX protocol.
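Verifying that a neighbor has been discovered is a quick check; the interface number below is illustrative, and the exact fields shown vary by NX-OS release:

```
! Confirm LLDP is running and see what the neighbor advertised.
switch# show lldp neighbors
switch# show lldp neighbors interface ethernet 1/10 detail
```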
DCBX allows both endpoints to negotiate certain features, such as:
- Priority Flow Control (PFC): the ability to apply flow control to individual traffic classes using virtual lanes. In standard QoS, we can, for example, distinguish what is CE from what is FCoE; with PFC, the switch can pause just the FCoE class, giving the network admin more granular control. In other words, when a PAUSE frame is received, a single virtual lane can be stopped rather than the entire interface.
- Enhanced Transmission Selection (ETS): tied closely to PFC, but enables strict QoS policies on the virtual lanes within an interface, giving us the ability to carve a 10 Gbps interface into smaller chunks of bandwidth.
- Logical UP/DOWN: the ability to shut down a certain type of traffic without affecting the rest of the traffic traveling on a particular interface. For example, a network admin troubleshooting a Unified Port could issue a "shutdown lan" command to remove all the CE traffic and find out if the CE traffic is the issue.
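The logical UP/DOWN example above can be sketched as the following NX-OS session; the interface number is illustrative:

```
! Hedged sketch: take down only the CE side of a unified wire while the
! FCoE traffic keeps flowing.
switch(config)# interface ethernet 1/10
switch(config-if)# shutdown lan       ! logical down for Ethernet traffic only
! ... test whether the problem follows the CE traffic ...
switch(config-if)# no shutdown lan    ! restore CE traffic when done
```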
These capabilities can be discovered automatically between Nexus switches and second-generation CNAs, reducing the amount of configuration required of the network administrators. They should also adapt readily to a multi-vendor network, since DCBX is an IEEE standard (although some vendors have been trying to slip their own proprietary information into it).
A Unified Fabric is not out of reach, but it must be considered seriously by all the parties involved. Some data center managers get hung up on the fact that it will save them money and forget about the initial capital expenditure that needs to take place. SAN administrators may want to stonewall the project due to a lack of understanding of what it will actually accomplish, losing sight of the bigger-picture goal, which is to remove all the "SAN only" network gear. Network administrators, for their part, need a strong understanding of Fibre Channel to verify end-to-end connectivity across the F, E, and TE ports, as well as FLOGI and FCNS.
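That end-to-end verification typically starts with a handful of NX-OS show commands; the exact output columns vary by release:

```
! Typical verification commands for FC/FCoE connectivity on NX-OS.
switch# show interface brief       ! port modes (F, E, TE) and status
switch# show flogi database        ! devices that completed fabric login
switch# show fcns database         ! the fabric name server's view of the fabric
switch# show vsan membership       ! which interfaces belong to which VSAN
```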
DCUFI — Implementing Cisco Data Center Unified Fabric v4.0 (formerly DCNX5+7)
DCUFD — Designing Cisco Data Center Unified Fabric v3.0
Excerpted from Global Knowledge: The Main Components of Unified Fabric