The evolution of Data Center networking technologies

2011 First International Conference on Data Compression, Communications and Processing

Antonio Scarfò
Maticmind SpA, Naples, Italy
ascarfo@maticmind.it

Abstract: The emerging challenges, such as simplicity, efficiency and agility, together with the new optical-empowered technologies, are driving the innovation of networking in the datacenter. Virtualization, consolidation and, more generally, the Cloud-oriented approach are the pillars of the new technological wave. A few key technologies, FCoE, TRILL and OTV, are leading this evolution, fostering the development of new networking architectures, models and communication paradigms. In this scenario, both the design models and the power/footprint ratios of the datacenter are changing significantly. This work presents the state of the art of the technologies driving the Data Center evolution, focusing mainly on the most novel and evolutionary aspects of the networking architectures, protocols and standards.

I. INTRODUCTION

What are the emerging trends in the Data Center? What are large enterprises asking of their CIOs? The pressures behind these questions concern cost containment along with performance, agility and standards compliance. Almost a third of all IT-related spending goes to data centers [3]. ICT is increasingly recognized as strategic from the companies' perspective: the ability to maximize the efficiency of IT spending, and the ability of ICT architectures to follow business requirements with an eye to time to market, have become key elements of corporate competitiveness. As a direct consequence, we observe the growth of Cloud Computing and HPC facilities, together with the ever-increasing demand for virtualization/consolidation projects and massively scalable Data Centers. The main focus of this work is a state-of-the-art perspective on the most significant efforts of the ICT community and industry in responding to these trends. Accordingly, the following sections present the most interesting technologies that standardization bodies and vendors are proposing in order to support the emerging requirements and to remove the current barriers to Data Center evolution.

II. FIBRE CHANNEL OVER ETHERNET

Fibre Channel over Ethernet (FCoE) is a standard protocol that natively maps Fibre Channel onto Ethernet in order to transport it over cheaper LAN infrastructures. The massive adoption of server virtualization and the availability of 10 Gigabit Ethernet interfaces are the major drivers of FCoE. The growing computing power of server platforms allows more virtual machines to be hosted per server, resulting in a greater need for I/O capacity. 10 Gigabit Ethernet and 8 Gbps Fibre Channel are a viable answer to this need; 10 Gbps FCoE is a step forward, since it addresses both the growing I/O capacity needs and I/O consolidation in the Data Center networking infrastructure. FCoE is also a future-proof technology because it:

- is fully compatible with existing FC infrastructures;
- is transparent to the SCSI layer, thus preserving FC storage management;
- scales with the availability of 40 Gigabit and 100 Gigabit Ethernet.

A. Standard architecture

FCoE, as defined by the INCITS/ANSI Fibre Channel (T11) Technical Committee, is designed to be simple: it essentially encapsulates FC frames into Ethernet frames while preserving the upper-layer infrastructure. Fig. 1 shows how FC is encapsulated in a 10 Gigabit Ethernet frame and how the SCSI protocol layers relate to it.

Figure 1. FCoE encapsulation framework (source: SNIA Education)
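To make the encapsulation idea concrete, the following is a minimal Python sketch of how an FC frame might be wrapped into an Ethernet payload carrying the FCoE EtherType (0x8906). Field sizes of the FCoE header are abbreviated, the Ethernet FCS is omitted and the MAC and delimiter values are placeholders, so this illustrates the layering described above rather than a wire-accurate implementation of the T11 format.

```python
import struct

FCOE_ETHERTYPE = 0x8906          # EtherType assigned to FCoE
SOF_I3, EOF_T = 0x2E, 0x42       # example start/end-of-frame delimiter codes

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an FC frame into an Ethernet payload (simplified layout:
    Ethernet header | FCoE header (version, reserved, SOF) | FC frame |
    EOF | reserved).  Reserved-field sizes are abbreviated, FCS omitted."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = struct.pack("!B12xB", 0x00, SOF_I3)   # version byte, padding, SOF
    trailer = struct.pack("!B3x", EOF_T)                # EOF plus reserved bytes
    return eth_header + fcoe_header + fc_frame + trailer

# Dummy 40-byte FC frame: 24-byte FC header plus a small SCSI payload.
fc_frame = bytes(24) + b"SCSI-CMD-PAYLOAD"
wire = fcoe_encapsulate(bytes.fromhex("0efc00000001"),
                        bytes.fromhex("0efc00000002"),
                        fc_frame)
print(len(wire), hex(int.from_bytes(wire[12:14], "big")))   # 72 0x8906
```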

We have to note, however, that this approach also involves other standards bodies, namely IEEE and IETF. Ethernet, in fact, was not designed to carry FC traffic, and IEEE and IETF are working to create a suitable environment in which FC traffic can survive on an Ethernet infrastructure. IEEE provides a lossless Ethernet environment through a set of Ethernet enhancements named DCB (Data Center Bridging, also called CEE by many vendors, DCE by Cisco and EEDC in the original Intel terminology). The DCB framework is defined by the following specifications [2]:

- 802.1Qbb, Priority Flow Control (PFC): controls a flow (by pausing transmission) on a per-priority basis, allowing lossless FCoE traffic without affecting classical Ethernet traffic; priorities are carried in 802.1Q tags (the PFC frame format is defined by 802.3bd).
- 802.1Qaz, Enhanced Transmission Selection (ETS): allows bandwidth allocation based on priority groups and strict priority for low-latency traffic; it also incorporates DCBX (the DCB capability exchange protocol, part of 802.1Qaz).
- 802.1Qau, Congestion Notification (CN): allows congestion to be pushed to the edge of the network.

The first FCoE standard was approved in June 2009 (INCITS/ANSI T11 FC-BB-5); FC-BB_E defines the means by which Fibre Channel frames are transported over a lossless Ethernet infrastructure.

B. Adoption and benefits

FCoE drastically changes the Data Center architecture. Its adoption starts from the network interfaces, since a new Ethernet NIC, known as a CNA (Converged Network Adapter), is needed. Fig. 2 shows how the CNA preserves the upper-layer infrastructure: from the OS perspective nothing changes, it is like having two 10 Gigabit Ethernet NICs and two 8 Gbps FC HBAs.

Figure 2. Converged Network Adapter (source: SNIA Education)

New network devices that support the FCoE standard are able to interface with the legacy FC infrastructure; hence they allow a smooth migration path and preserve previous investments in FC fabrics. The network access model in the FCoE single-hop scenario changes as shown in fig. 3. Although the single-hop arrangement is only a first step in the FCoE evolution, it already allows a dramatic reduction of the NICs, HBAs, cabling and switches installed in the Data Center.

Figure 3. Network access model evolution in the FCoE single-hop scenario (source: Cisco Systems)

Fig. 4 shows a general multi-hop design model. In this scenario it is possible to design a Fibre Channel fabric over a lossless Ethernet infrastructure, completely avoiding the use of classic Fibre Channel switches.

Figure 4. Network access model evolution in the FCoE multi-hop scenario (icons source: Cisco Systems)

This is an emerging design model, which places the traffic management issues within the Data Center core network. The actual benefits of FCoE adoption obviously depend on the requirements and constraints of the specific case; however, several studies and business cases [7] show that the return on investment can typically be measured in terms of:

- networking costs (NICs, switches, etc.);
- cabling costs;
- infrastructure costs (space, racks, ...);
- power consumption.
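As a rough illustration of where these savings come from, the short sketch below counts cables and switch ports for a rack of servers before and after convergence. The per-server port counts (2 Ethernet NIC ports, 2 FC HBA ports, 2 CNA ports) are assumptions chosen for the example, not figures taken from the studies cited above.

```python
# Illustrative cable/port count for a rack of servers, before and after
# I/O consolidation with FCoE.  All per-server figures are assumptions.
def io_footprint(servers: int, eth_ports: int, fc_ports: int, cna_ports: int):
    before_cables = servers * (eth_ports + fc_ports)   # separate LAN + SAN cabling
    after_cables = servers * cna_ports                 # converged cabling
    return {
        "cables_before": before_cables,
        "cables_after": after_cables,
        "cables_saved": before_cables - after_cables,
        "switch_ports_saved": before_cables - after_cables,  # one port per cable
    }

print(io_footprint(servers=40, eth_ports=2, fc_ports=2, cna_ports=2))
# {'cables_before': 160, 'cables_after': 80, 'cables_saved': 80, 'switch_ports_saved': 80}
```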

Furthermore, the adoption of FCoE improves a company's agility, in terms of the time needed to deploy new applications and new servers. Finally, some benefits concern network reliability, thanks to the reduction in devices and required connections. It should be noted that FCoE adoption typically takes place in two phases: collapsed access first, then fully converged networks.

III. TRILL AND FABRICPATH

Cloud computing and HPC are the main drivers of highly dense and highly virtualized data centers. The designers of Data Center network architectures now have to face increasing complexity, the need for greater workload (VM) mobility and the need to reduce energy consumption. New traffic patterns are emerging: from mainly client-server flows, usually called north-to-south, to a combination of client-server and server-to-server flows, i.e. east-to-west plus north-to-south streams [3][13]. In order to accommodate an increasing level of application flexibility in high-end infrastructures, new key attributes for Data Center network architectures have to be considered: scalability and flexibility at the hardware level, simplicity in design and operations, and increased performance and resiliency.

Figure 5. Traditional three-tier fabric architecture (source: Lippis [3])

The classic Data Center design model is based on a multi-layer design (fig. 5), with all the Layer 3 features and the oversubscription concentrated in the aggregation/core layers. This classic model does not support the emerging key attributes of the Data Center network: it severely limits workload mobility and east-to-west traffic flows. IP address space segmentation is a barrier to workload mobility and, in a scenario with thousands or tens of thousands of servers, where east-west bandwidth demand can be significant, oversubscription is not acceptable. In particular, the Spanning Tree Protocol (STP) is very ineffective: it is not deterministic, leaves some links unused, does not support multipath, has long re-convergence times, and brings confusion with its many modes. Finally, STP dramatically limits the scalability of the network precisely where the growing east-to-west traffic flows need to be supported.

In order to address scalability and workload mobility requirements, a single, larger Ethernet domain with the typical attributes of IP routing is being proposed; these efforts bring the scalability and reliability of wide-area routed networks into the Ethernet world. IETF is working on a new protocol called TRILL (TRansparent Interconnection of a Lot of Links), while IEEE is working on Shortest Path Bridging (SPB). TRILL is a Layer 2 multipath protocol that includes some typical Layer 3 features. RBridges are the Layer 2 devices implementing TRILL; they run a link-state protocol (IS-IS). RBridges compute the optimal path to each unicast destination, and the network does not have to be kept loop-free: to mitigate temporary loops, RBridges take forwarding decisions based on a header that carries a hop count. TRILL also supports multipathing in order to ensure further scalability. Neither TRILL nor SPB is a ratified standard yet, and some vendors are shipping proprietary versions of TRILL: Brocade is releasing Virtual Cluster Switching, while Cisco has released FabricPath on its Nexus switches. FabricPath is a superset of TRILL and can be seen as a TRILL precursor. FabricPath uses routing techniques such as building a routing table of the different devices in the network.
It is based on a routing protocol (IS-IS), which calculates the paths that packets can traverse through the network. What FabricPath adds is the ability of the control plane to know the topology of the network and to select different routes for the traffic. FabricPath can also use multiple routes simultaneously, so that traffic is spread across multiple paths. In other words, as defined by [3], FabricPath is "link aggregation on steroids". When using STP between two upstream switch chassis, one path is active and one is standby. To address this restriction, vendors such as Cisco offered vPC, which allows link aggregation between two chassis with both links active. FabricPath's multipathing scales link aggregation up to 16 different chassis; this is essential, as network design changes completely when link aggregation scales from 1 or 2 up to 16 links.

The new TRILL-based approach is completely different from the scenario imposed by the multi-layer hierarchical model with STP (fig. 6).

Figure 6. Evolution of the fabric architecture in the FabricPath scenario (source: Lippis [3])

TRILL, like FabricPath, gives network designers the opportunity to build very large, broad, scalable topologies without having to build multiple tiers, transforming the Data Center architecture from a hierarchy into a flat topology where only one hop separates any two nodes. This can drastically change our view of the worldwide transport network (the Internet), currently based only on IP, and it is also good news from a latency point of view.
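The multipath behaviour that FabricPath and TRILL add on top of a link-state topology can be illustrated with a small sketch: all equal-cost shortest paths between two edge switches are computed and a per-flow hash picks one of them, so different flows spread across the fabric. The toy leaf/spine topology, the BFS path search and the hash choice are illustrative assumptions standing in for the real IS-IS computation and hardware hashing.

```python
import hashlib
from collections import deque

# Toy two-tier fabric: every leaf connects to all four spines with unit-cost links.
# Topology, names and flow identifiers are illustrative assumptions only.
TOPOLOGY = {
    "leaf1":  ["spine1", "spine2", "spine3", "spine4"],
    "leaf2":  ["spine1", "spine2", "spine3", "spine4"],
    "spine1": ["leaf1", "leaf2"], "spine2": ["leaf1", "leaf2"],
    "spine3": ["leaf1", "leaf2"], "spine4": ["leaf1", "leaf2"],
}

def equal_cost_paths(src, dst):
    """Breadth-first search returning every shortest (equal-cost) path src -> dst."""
    best, paths, queue = None, [], deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break                      # all shortest paths already found
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for neighbour in TOPOLOGY[node]:
            if neighbour not in path:
                queue.append(path + [neighbour])
    return paths

def pick_path(flow_id, paths):
    """Hash a flow identifier onto one of the equal-cost paths (ECMP-style)."""
    digest = hashlib.sha256(flow_id.encode()).digest()
    return paths[digest[0] % len(paths)]

paths = equal_cost_paths("leaf1", "leaf2")
for flow in ("vm-a:vm-b:tcp/445", "vm-c:vm-d:tcp/3260"):
    print(flow, pick_path(flow, paths))
```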

By deploying the FabricPath technology (and TRILL in the near future), the Data Center network becomes more resilient, more performant, more predictable and easier to manage. Also, in terms of workload mobility, FabricPath allows IT architects to scale more broadly by suppressing many of the IP subnetting barriers, thus allowing a VLAN to dynamically span the entire physical infrastructure (see fig. 7). There will be no constraints affecting the mobility of networked entities, and hence flexibility and scalability will be significantly enhanced.

Figure 7. Evolution of the VLAN span in the FabricPath scenario (source: Lippis [3])

IV. OVERLAY TRANSPORT VIRTUALIZATION

The requirements cited in the previous sections can be stretched further by considering two or more Data Centers, in a long-distance scenario, virtualized as a single one, so that their resources belong to a single pool. The perspective of an active-active virtualized Data Center is very interesting in order to grant business continuity (disaster avoidance) and to implement cloud-oriented infrastructures. Features like seamless Virtual Machine (VM) mobility (e.g. VMware vMotion) make it possible to optimally arrange the VM workload geographically. In order to implement an architecture suitable for long-distance vMotion, several ingredients are needed: a common Layer 2 domain (Layer 2 adjacency), enough bandwidth, shared storage, and so on. Focusing on the Layer 2 issues, the technical requirement of Layer 2 adjacency is simply the extension of the need to allow a VLAN to span a single physical Data Center to the need to allow a VLAN to span several physical Data Centers, dynamically. To achieve Layer 2 adjacency between physical Data Centers in a long-distance scenario, Cisco has developed a new protocol called OTV (Overlay Transport Virtualization). With OTV, Cisco focuses on the following attributes [4]:

- LAN extension: extend the same VLAN across Data Centers, to virtualize servers and applications;
- storage extension: provide applications access to storage locally as well as remotely, with the desired storage attributes;
- routing optimization: route users to the data center where the application resides, while keeping routing symmetrical for IP services (e.g. firewalls);
- application mobility: enable applications to be extended across data centers (e.g. VMware vMotion).

From a Data Plane point of view, OTV can be viewed as a MAC-address routing facility on top of an IP infrastructure, in which the destinations are MAC addresses and the next hops are IP addresses. OTV simply maps MAC-address destinations to IP next hops that are reachable through the network cloud; traffic destined to a particular MAC address is encapsulated in IP and carried through the IP cloud to its MAC-address routing next hop [4].

Figure 8. OTV working model (source: Cisco Systems)

From the Control Plane point of view, OTV does not rely on traffic flooding to propagate MAC-address reachability information: flooding of unknown traffic is suppressed, Address Resolution Protocol (ARP) traffic is forwarded only in a controlled manner, and Spanning Tree Bridge Protocol Data Units (BPDUs) are not forwarded at all. Instead, OTV proactively advertises MAC reachability between the OTV edge devices, and MAC addresses are advertised in the background once OTV has been configured. This is really effective in preserving most of the resiliency, scalability, multipathing and failure-isolation characteristics of a Layer 3 network [4].
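A toy model of the behaviour just described might look like the following: the edge device keeps a table mapping MAC addresses to the IP address of the remote edge that advertised them, encapsulates frames towards that next hop, and drops unknown unicast instead of flooding it. The class, addresses and dictionary-based "encapsulation" are illustrative placeholders, not the actual OTV header format or control protocol.

```python
from dataclasses import dataclass, field

@dataclass
class OtvEdge:
    local_ip: str
    local_macs: set = field(default_factory=set)
    mac_to_edge_ip: dict = field(default_factory=dict)   # filled by the control plane

    def learn_remote(self, mac: str, edge_ip: str):
        """Install a MAC route advertised by a remote edge device."""
        self.mac_to_edge_ip[mac] = edge_ip

    def forward(self, dst_mac: str, frame: bytes):
        if dst_mac in self.local_macs:
            return ("deliver-locally", frame)
        edge_ip = self.mac_to_edge_ip.get(dst_mac)
        if edge_ip is None:
            # Unknown unicast is dropped, not flooded across the overlay.
            return ("drop", None)
        return ("encapsulate", {"outer_dst_ip": edge_ip, "payload": frame})

edge_a = OtvEdge(local_ip="10.0.0.1", local_macs={"00:50:56:aa:aa:aa"})
edge_a.learn_remote("00:50:56:bb:bb:bb", edge_ip="10.0.0.2")
print(edge_a.forward("00:50:56:bb:bb:bb", b"ethernet-frame"))
print(edge_a.forward("00:50:56:cc:cc:cc", b"ethernet-frame"))
```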
Although the OTV Control Plane is based on IS-IS, OTV does not require complex configurations, nor does it require operators to learn new protocols. But why should OTV be used when other protocols are capable of providing Layer 2 adjacency? Ethernet over MPLS (EoMPLS), VPLS and A-VPLS are able to provide Pseudo Wire (PW) functionality, also over wide-area distributed computing infrastructures [10][11]. A first notable difference is that all three are PW service protocols and use an MPLS infrastructure to build the PWs, whereas OTV can ride over any IP network. Another fundamental difference is that OTV is not a PW protocol: OTV uses an advanced Control Plane in order to address the fundamental LAN extension challenges [4] (site independence, transport independence, multihoming and end-to-end loop prevention, bandwidth utilization with replication, load balancing and path diversity, scalability and topology independence, VLAN and MAC address scalability, operational complexity). OTV brings many improvements that customers have been asking for in other Data Center infrastructure solutions, namely simpler configuration, dynamic MAC advertisement rather than traditional flooding, ARP optimizations and transport agnosticism [5]. Several vendors have certified architectures for implementing long-distance vMotion in a cloud services perspective; one of them has been built by Cisco, VMware and NetApp.
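Distance enters the picture mainly through round-trip latency. The back-of-the-envelope sketch below estimates the extra RTT introduced by the data-center interconnect; the propagation figure of roughly 5 microseconds per kilometre of fibre and the 5 ms vMotion latency budget are common rules of thumb assumed here for illustration, not values taken from the validation described next.

```python
# Rough latency budget for long-distance vMotion over an OTV-extended VLAN.
# Assumptions (rules of thumb, check vendor documentation for real limits):
# ~5 us of one-way propagation delay per km of fibre, ~5 ms vMotion RTT budget.
FIBRE_DELAY_US_PER_KM = 5.0
VMOTION_RTT_BUDGET_MS = 5.0

def dci_rtt_ms(distance_km: float, equipment_rtt_ms: float = 0.5) -> float:
    """Round-trip propagation delay plus an assumed fixed equipment/queuing term."""
    return 2 * distance_km * FIBRE_DELAY_US_PER_KM / 1000.0 + equipment_rtt_ms

for km in (50, 100, 200):
    rtt = dci_rtt_ms(km)
    verdict = "within" if rtt <= VMOTION_RTT_BUDGET_MS else "exceeds"
    print(f"{km:>4} km: ~{rtt:.2f} ms RTT ({verdict} a 5 ms budget)")
```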

The case depicted in fig. 9 gave some very interesting results, with a distance of the order of a hundred kilometers between the two datacenter locations.

Figure 9. Long-distance vMotion validation tests (source: Cisco Systems)

V. CONCLUSIONS

In the previous sections we have discussed some of the new Data Center networking technologies, trends and design evolutions. All of them address a strong emerging set of needs, coming from business and IT requirements, that impacts Data Center networking architectures. The need for more agile, efficient and secure ICT infrastructures, and consequently the growing adoption of virtualization and Cloud Computing architectures, drives the evolution of the current networking technologies and architectures within the Data Center. The challenges for the Data Center networking infrastructure concern:

- consolidation and cost reduction: fewer interfaces, devices and cables, and lower power consumption;
- simpler management and increased automation;
- support for clustering and workload mobility (vMotion), over both short and long distances, even in high-end Data Centers;
- support for the new traffic patterns.

In order to address these needs, Data Center networking technologies are changing as the Ethernet architecture evolves through the development of three new technologies: FCoE, TRILL (FabricPath) and OTV. Thanks to FCoE it is possible to address most of the consolidation opportunities, starting the journey towards unified I/O. FabricPath, and in the near future TRILL, permits a new Data Center design that supports emerging traffic patterns such as east-to-west flows and allows a VLAN to span the entire physical infrastructure dynamically; this increases scalability and automation and supports workload mobility. OTV allows Layer 2 adjacency to be achieved between different Data Centers, supporting geographic workload mobility and clustering.

In this scenario two key factors emerge. Ethernet is more and more the premium connectivity technology: through the evolution of its features it is replacing the other Layer 2 technologies as the transmission platform, and all the other protocols will either disappear or survive transported over Ethernet. In order to turn Ethernet into the networking platform of the Data Center, there is strong evolutionary pressure on its classic features: Ethernet needs to become more scalable, predictable, simple to configure and flexible. It is important to remark that Ethernet needs to integrate some typical Layer 3 features into its Control Plane. In this scenario, the routing protocol best suited to implement the new features of the Ethernet Control Plane is IS-IS, which is very attractive for its capability of carrying MAC address information; that is also why it is the foundation of the OTV Control Plane.

REFERENCES

[1] S. Gai, T. Salli and R. Anderson, Cisco Unified Computing System, Cisco Press, 2010.
[2] S. Gai and C. DeSanti, I/O Consolidation in the Data Center, Cisco Press (TRILL protocol documents approved as IETF standards but not yet published as RFCs).
[3] N. J. Lippis, "A Simpler Data Center Fabric Emerges", http://lippisreport.com/2010/07/a-simpler-data-center-fabric-emerges-for-the-age-of-massively-scalable-data-centers/, June 2010.
[4] Cisco, "Overlay Transport Virtualization Technology Introduction and Deployment Considerations", http://www.cisco.com/en/us/docs/solutions/enterprise/data_center/DCI/whitepaper/DCI_.html, Cisco White Paper, 2011.
[5] R. Perlman and J. Touch, "Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement", IETF RFC 5556, June 2009.
[6] R. Perlman, D. Eastlake, D. G. Dutt, S. Gai and A. Ghanwani, "RBridges: Base Protocol Specification", IETF Internet-Draft, March 2010 (approved as an IETF standard, later published as RFC 6325).
[7] R. Perry and L. Borovick, "The ROI of Converged Networking Using Unified Fabric", www.cisco.com/en/.../white_paper_roi_converged_nw.pdf, IDC, March 2011.
[8] J. L. Hufferd, "Fibre Channel over Ethernet", http://www.snia.org/education/tutorials/2007/fall/networking/johnhufferd_fiber_channel_over_ethernet.pdf, SNIA Education, 2008.
[9] http://media.netapp.com/documents/wp-app-mobility.pdf, Cisco, March 2010.
[10] F. Palmieri and S. Pardi, "Towards a federated Metropolitan Area Grid environment: The SCoPE network-aware infrastructure", Future Generation Computer Systems, 26(8), 2010.
[11] F. Palmieri, "Introducing Virtual Private Overlay Network services in large scale Grid infrastructures", Journal of Computers, April 2007.
[12] "Technology Comparison: Cisco Overlay Transport Virtualization and Virtual Private LAN Service as Enablers of LAN Extensions", Cisco White Paper, 2010.
[13] N. Allen, "FCoE Standards", http://wikibon.org, Aug 2010.
[14] G. Ferro, "Explaining L2 Multipath", http://etherealmind.com/layer-2-multipath-east-west-bandwidth-switch-designs/, March 2011.
[15] G. Ferro, "Explaining L2 Multipath", http://etherealmind.com/layer-2-multipath-east-west-bandwidth-switch-designs/, March 2010.
[16] R. Fuller, "How to Stretch VLANs Between Multiple Physical Data Centers", Network World, Oct 2010.