Protocols for Data Center Network Virtualization and Cloud Computing
1 Protocols for Data Center Network Virtualization and Cloud Computing. Washington University in Saint Louis, Saint Louis, MO. Tutorial at the 19th Annual International Conference on Advanced Computing and Communications (ADCOM) 2013, Chennai, India, October 21, 2013.
2 Overview 1. Part I: Network Virtualization 2. Part II: Data Center Bridging 3. Part III: Carrier Ethernet for Data Centers Break 4. Part IV: Virtual Bridging 5. Part V: LAN Extension and Partitioning 2
3 Part I: Network Virtualization 1. Virtualization 2. Why Virtualize? 3. Network Virtualization 4. Names, IDs, Locators 5. Interconnection Devices 3
4 Part II: Data Center Bridging 1. Residential vs. Data Center Ethernet 2. Review of Ethernet devices and algorithms 3. Enhancements to Spanning Tree Protocol 4. Virtual LANs 5. Data Center Bridging Extensions 4
5 Part III: Carrier Ethernet for Data Centers 1. Provider Bridges (PB) or Q-in-Q 2. Provider Backbone Bridges (PBB) or MAC-in-MAC 3. Provider Backbone Bridges with Traffic Engineering (PBB- TE) Note: Although these technologies were originally developed for carriers, they are now used inside multi-tenant data centers (clouds) 5
6 Part IV: Virtual Bridging 1. Virtual Bridges to connect virtual machines 2. IEEE Virtual Edge Bridging Standard 3. Single Root I/O Virtualization (SR-IOV) 4. Aggregating Bridges and Links: VSS and vPC 5. Bridges with massive number of ports: VBE
7 Part V: LAN Extension and Partitioning 1. Transparent Interconnection of Lots of Links (TRILL) 2. Network Virtualization using GRE (NVGRE) 3. Virtual extensible LANs (VXLAN) 4. Stateless Transport Tunneling Protocol (STT) 7
8 Part I: Network Virtualization 1. Virtualization 2. Why Virtualize? 3. Network Virtualization 4. Names, IDs, Locators 5. Interconnection Devices 8
9 Virtualization Virtualization means that Applications can use a resource without any concern for where it resides, what the technical interface is, how it has been implemented, which platform it uses, and how much of it is available. -Rick F. Van der Lans in Data Virtualization for Business Intelligence Systems 9
10 5 Reasons to Virtualize
1. Sharing: Break up a large-capacity or high-speed resource, e.g., servers, 10 Gb links
2. Isolation: Protection from other tenants, e.g., Virtual Private Networks
3. Aggregation: Combine many resources into one, e.g., storage
4. Dynamics: Fast allocation, change/mobility, load balancing, e.g., virtual machines
5. Ease of Management: Easy distribution, deployment, testing
11 Advantages of Virtualization
Minimize hardware costs (CapEx): multiple virtual servers on one physical machine
Easily move VMs to other data centers: provides disaster recovery, hardware maintenance, follow the sun (active users) or follow the moon (cheap power)
Consolidate idle workloads (usage is bursty and asynchronous): increase device utilization, conserve power, free up unused physical resources
Easier automation (lower OpEx): simplified provisioning/administration of hardware and software
Scalability and flexibility: multiple operating systems
Ref: K. Hess, A. Newman, "Practical Virtualization Solutions: Virtualization from the Trenches," Prentice Hall, 2009.
12 Virtualization in Computing. Storage: virtual memory (L1, L2, L3, ... recursively), virtual CDs, virtual disks (RAID), cloud storage. Computing: virtual desktops (thin clients), virtual servers (VMs), virtual data centers (cloud). Networking, the plumbing of computing: virtual channels, virtual LANs, virtual private networks.
13 Network Virtualization 1. Network virtualization allows tenants to form an overlay network in a multi-tenant network such that the tenant can control: 1. Connectivity layer: the tenant network can be L2 while the provider is L3, and vice versa 2. Addresses: MAC addresses and IP addresses 3. Network partitions: VLANs and subnets 4. Node location: move nodes freely 2. Network virtualization allows providers to serve a large number of tenants without worrying about: 1. Internal addresses used in client networks 2. Number of client nodes 3. Location of individual client nodes 4. Number and values of client partitions (VLANs and subnets) 3. The network could be a single physical interface, a single physical machine, a data center, a metro, or the global Internet. 4. The provider could be a system owner, an enterprise, a cloud provider, or a carrier.
14 Levels of Network Virtualization. Data center networks consist of: network interface cards (NICs), L2 links, L2 bridges, L2 networks, L3 links, L3 routers, L3 networks, data centers, and the global Internet. Each of these needs to be virtualized.
15 Network Virtualization Techniques

Entity               | Partitioning            | Aggregation/Extension/Interconnection**
NIC                  | SR-IOV                  | MR-IOV
Switch               | VEB, VEPA               | VSS, VBE, DVS, FEX
L2 Link              | VLANs                   | LACP, Virtual PortChannels
L2 Network using L2  | VLAN                    | PB (Q-in-Q), PBB (MAC-in-MAC), PBB-TE, Access-EPL, EVPL, EVP-Tree, EVP-LAN
L2 Network using L3  | NVO3, VXLAN, NVGRE, STT | MPLS, VPLS, A-VPLS, H-VPLS, PWoMPLS, PWoGRE, OTV, TRILL, LISP, L2TPv3, EVPN, PBB-EVPN
Router               | VDCs, VRF               | VRRP, HSRP
L3 Network using L1  |                         | GMPLS, SONET
L3 Network using L3* | MPLS, GRE, PW, IPSec    | MPLS, T-MPLS, MPLS-TP, GRE, PW, IPSec
Application          | ADCs                    | Load Balancers

*All L2/L3 technologies for L2 network partitioning and aggregation can also be used for L3 network partitioning and aggregation, respectively, by simply putting L3 packets in L2 payloads. **The aggregation technologies can also be seen as partitioning technologies from the provider point of view.
16 Names, IDs, Locators. Name: John Smith. ID: Locator: 1234 Main Street, Big City, MO, USA. The locator changes as you move; the ID and name remain the same. Examples: Names: company names, DNS names (Microsoft.com). IDs: cell phone numbers, 800-numbers, Ethernet addresses, Skype IDs, VoIP phone numbers. Locators: wired phone numbers, IP addresses.
17 Interconnection Devices. LAN = collision domain; extended LAN = broadcast domain. Devices by layer: Repeater/Hub (Physical), Bridge/Switch (Datalink), Router (Network), Gateway (Application).
18 Interconnection Devices (Cont). Repeater: PHY device that restores data and collision signals. Hub: multiport repeater plus fault detection and recovery. Bridge: datalink layer device connecting two or more collision domains; MAC multicasts are propagated throughout the extended LAN. Router: network layer device (IP, IPX, AppleTalk); does not propagate MAC multicasts. Switch: multiport bridge with parallel paths. These are functions; packaging varies. There is no CSMA/CD at 10G and up, and no CSMA/CD in practice now even at home or at 10 Mbps.
19 Fallacies Taught in Networking Classes 1. Ethernet is a local area network (Local < 2km) 2. Token ring, Token Bus, and CSMA/CD are the three most common LAN access methods. 3. Ethernet uses CSMA/CD. 4. Ethernet bridges use spanning tree for packet forwarding. 5. Ethernet frames are limited to 1518 bytes. 6. Ethernet does not provide any delay guarantees. 7. Ethernet has no congestion control. 8. Ethernet has strict priorities. Ethernet has changed. All of these are now false or are becoming false. 19
20 Summary of Part I 1. Virtualization allows applications to use resources without worrying about their location, size, format, etc. 2. Ethernet's use of IDs as addresses makes it very easy to move systems in the data center: traffic stays on the same Ethernet. 3. Cloud computing requires Ethernet to be extended globally and partitioned for sharing by a very large number of customers who have complete control over their address assignment and connectivity. 4. Many of the previous limitations of Ethernet have been overcome in the last few years.
21 Part II: Data Center Bridging 1. Residential vs. Data Center Ethernet 2. Review of Ethernet devices and algorithms 3. Enhancements to Spanning Tree Protocol 4. Virtual LANs 5. Data Center Bridging Extensions 21
22 Residential vs. Data Center Ethernet
Residential: distance up to 200 m; few MAC addresses; 4096 VLANs; spanning tree protection; path determined by spanning tree; simple service; priority-based, aggregate QoS; no performance/error monitoring (OAM).
Data Center/Cloud: no distance limit; millions of MAC addresses; millions of VLANs (Q-in-Q); rapid spanning tree (gives ~1 s recovery, need 50 ms); traffic-engineered paths; Service Level Agreements and rate control; need per-flow/per-class QoS; need performance/BER monitoring.
23 Link Aggregation Control Protocol (LACP). IEEE 802.1AX-2008/IEEE 802.3ad-2000. Allows several parallel links to be combined as one link: 3 x 1 Gbps = 3 Gbps. Works with links of any speed. Provides fault tolerance: the combined link remains connected even if one of the member links fails. Several proprietary extensions exist, e.g., aggregating links to two switches that act as one switch. Ref: Enterasys, "Enterasys Design Center Networking Connectivity and Topology Design Guide," 2013.
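To preserve per-flow frame ordering, an aggregating switch typically hashes frame headers so that all frames of a conversation use the same member link. The slide does not prescribe an algorithm, so the sketch below is only an illustrative Python model (the function name and CRC-based hash are assumptions, not part of any standard):

```python
import zlib

def select_member_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Hash the MAC address pair so that every frame of a conversation
    is sent on the same member link (keeps frames in order)."""
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % num_links

# All frames between the same pair of stations always pick the same link:
a = select_member_link("00:1b:44:11:3a:b7", "00:1b:44:11:3a:c8", 3)
b = select_member_link("00:1b:44:11:3a:b7", "00:1b:44:11:3a:c8", 3)
assert a == b and 0 <= a < 3
```

Real switches hash additional fields (IP addresses, ports) for better spreading; the principle is the same.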
24 Spanning Tree Algorithm. Helps form a tree out of a mesh topology. All bridges multicast to all bridges: my ID (64-bit ID = 16-bit priority + 48-bit MAC address), the root ID, and my cost to the root. The bridges update their information from the received messages and rebroadcast. Initially all bridges claim to be the root, but they eventually converge on one root as they discover the lowest bridge ID. On each LAN, the bridge with the minimum cost to the root becomes the designated bridge. Ports of non-designated bridges on the LAN are blocked.
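The root election described above can be sketched in a few lines of Python. This is an illustrative model of the ID comparison only (the helper names are invented for this example), not a BPDU implementation:

```python
def bridge_id(priority: int, mac: str) -> int:
    """64-bit bridge ID: 16-bit priority in the high bits,
    48-bit MAC address in the low bits."""
    mac_int = int(mac.replace(":", ""), 16)
    return (priority << 48) | mac_int

def elect_root(bridges):
    """The bridge with the numerically lowest 64-bit ID becomes the root."""
    return min(bridges, key=lambda b: bridge_id(*b))

bridges = [
    (32768, "00:1b:44:11:3a:b7"),
    (4096,  "00:1b:44:22:3a:c8"),   # lowest priority wins regardless of MAC
    (32768, "00:1b:44:00:00:01"),
]
assert elect_root(bridges) == (4096, "00:1b:44:22:3a:c8")
```

Because priority occupies the high-order bits, an administrator can force a particular bridge to win by lowering its priority, as the second entry shows.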
25 Enhancements to STP. A topology change can result in up to 1 minute of traffic loss with STP; all TCP connections break. Rapid Spanning Tree Protocol (RSTP): IEEE 802.1w-2001, incorporated in IEEE 802.1D-2004; one tree for all VLANs (common spanning tree). Multiple Spanning Tree (MST) protocol: IEEE 802.1s-2002, incorporated in IEEE 802.1Q-2005; many trees, with one or more VLANs per tree.
26 MSTP (Multiple Spanning Tree). IEEE 802.1s-2002, incorporated in IEEE 802.1Q-2005. Each tree serves a group of VLANs. A bridge port can be in forwarding state for some VLANs and blocked state for others.
27 IS-IS Protocol. Intermediate System to Intermediate System (IS-IS) is a protocol to build routing tables. It is a link-state routing protocol: each node sends its connectivity (link-state) information to all nodes in the network, and Dijkstra's algorithm is then used by each node to build its routing table. Similar to OSPF (Open Shortest Path First), but: OSPF was designed for IPv4 and later extended for IPv6, while IS-IS is general enough to be used with any type of address; OSPF is designed to run on top of IP, while IS-IS is general enough to run over any transport. Now adopted by Ethernet.
28 Shortest Path Bridging. IEEE 802.1aq-2012. Allows all links to be used: better CapEx. The IS-IS link-state protocol (similar to OSPF) is used to build shortest-path trees from each node to every other node within the SPB domain. Equal-cost multipath (ECMP) is used to distribute load.
29 What is a LAN? LAN = single broadcast domain = subnet. No routing between members of a LAN; routing is required between LANs.
30 Virtual LAN. In a virtual LAN, broadcasts and multicasts go only to the nodes in that virtual LAN. LAN membership is defined by the network manager.
31 IEEE 802.1Q Tag
Tag Protocol Identifier (TPID): 0x8100
Priority Code Point (PCP): 3 bits = 8 priorities, 0..7 (high)
Canonical Format Indicator (CFI): 0 = standard Ethernet, 1 = IBM Token Ring format (non-canonical); CFI is now replaced by the Drop Eligibility Indicator (DEI)
VLAN Identifier: 12 bits = 4096 VLANs
Switches forward based on MAC address + VLAN ID; unknown addresses are flooded.
Untagged frame: DA (48b) | SA (48b) | T/L (16b) | Data | CRC (32b)
Tagged frame (IEEE 802.1Q-2011): DA (48b) | SA (48b) | TPID (16b) | PCP (3b) | CFI/DEI (1b) | VLAN ID (12b) | T/L (16b) | Data | CRC (32b)
Ref: G. Santana, Data Center Virtualization Fundamentals, Cisco Press, 2014.
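The 4-byte tag layout above is easy to verify by packing it. A minimal Python sketch (the function name is invented for illustration):

```python
import struct

def vlan_tag(pcp: int, dei: int, vid: int) -> bytes:
    """802.1Q tag: TPID 0x8100 followed by the 16-bit TCI
    (3-bit PCP, 1-bit DEI, 12-bit VLAN ID)."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vid <= 4095
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

# Priority 5, DEI 0, VLAN 100 -> 0x8100 A064
tag = vlan_tag(pcp=5, dei=0, vid=100)
assert tag == bytes.fromhex("8100a064")
```

A switch would splice these 4 bytes between the source MAC address and the original EtherType of an untagged frame.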
32 Link Layer Discovery Protocol (LLDP). IEEE 802.1AB-2009. Neighbor discovery by periodic advertisements: an LLDP frame is periodically (by default every 30 seconds) sent on every port to neighbors. The frame carries information as Type-Length-Value (TLV) fields. Types include: chassis ID, port ID, time-to-live, port description (manufacturer, product name, version), administratively assigned system name, capabilities, MAC address, IP address, power-via-MDI, link aggregation, and maximum frame size. Frame: DA | SA | Len | LLC | (Type, Len, Value)* | CRC. Refs: Extreme Networks, "Link Layer Discovery Protocol (LLDP)"; M. Srinivasan, "Tutorial on LLDP".
33 Data Center Bridging. Goal: to enable storage traffic over Ethernet. Four standards: Priority-based Flow Control (IEEE 802.1Qbb-2011), Enhanced Transmission Selection (IEEE 802.1Qaz-2011), Congestion Notification (IEEE 802.1Qau-2010), and Data Center Bridging Exchange (IEEE 802.1Qaz-2011). Ref: M. Hagen, "Data Center Bridging Tutorial".
34 Ethernet Flow Control: Pause Frame. Defined in IEEE 802.3x. A form of on-off flow control: a receiving switch can stop the adjoining sending switch by sending a Pause frame, which stops the sender from sending any further data for a time specified in the frame. The frame is addressed to a standard (well-known) multicast address, which is acted upon but not forwarded. Drawback: it stops all traffic and causes congestion to back up.
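A Pause frame is small enough to construct by hand, which makes the mechanism concrete. A hedged Python sketch (helper name and example MAC are invented; the CRC is omitted since the NIC normally appends it):

```python
import struct

# Well-known MAC Control multicast address: acted upon, never forwarded
PAUSE_DA = bytes.fromhex("0180c2000001")

def pause_frame(src_mac: str, pause_quanta: int) -> bytes:
    """IEEE 802.3x PAUSE frame: MAC Control EtherType 0x8808,
    opcode 0x0001, pause time in units of 512 bit times;
    zero-padded to the 60-byte minimum frame size (CRC excluded)."""
    sa = bytes.fromhex(src_mac.replace(":", ""))
    frame = PAUSE_DA + sa + struct.pack("!HHH", 0x8808, 0x0001, pause_quanta)
    return frame.ljust(60, b"\x00")

f = pause_frame("00:1b:44:11:3a:b7", 0xFFFF)   # 0xFFFF = longest pause
assert len(f) == 60 and f[12:14] == b"\x88\x08"
```

Sending a frame with pause time 0 tells the neighbor to resume immediately; that is how a switch cancels an earlier pause.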
35 Priority-based Flow Control (PFC). IEEE 802.1Qbb-2011 allows any single priority (0..7) to be paused while the others keep sending. Ref: J. L. White, "Technical Overview of Data Center Networks," SNIA, 2013.
36 Enhanced Transmission Selection. IEEE 802.1Qaz-2011. Goal: guarantee bandwidth for applications sharing a link. Traffic is divided into 8 classes (not priorities), and the classes are grouped. The standard requires a minimum of 3 groups: one with PFC (storage, low loss), one without PFC (LAN, best effort), and one strict-priority group (inter-process communication and VoIP, low latency).
37 ETS (Cont). Each class group feeds its own set of transmit queues (queues 0-7). Bandwidth is allocated per class group in 1% increments but with 10% precision (plus or minus 10% error). At most 75% may be allocated, leaving at least 25% for best effort. There is fairness within a group, and all unused bandwidth is available to any class wanting more. The allocation algorithm itself is not defined by the standard. Example: Group 3 = 20%, Group 2 = 30%.
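Since the standard deliberately leaves the allocation algorithm undefined, the following Python sketch is only one illustrative (non-normative) way to realize the behavior described above: each group gets its configured share, and bandwidth a group does not need is redistributed to groups that want more.

```python
def ets_share(link_bw: float, alloc_pct: dict, demand: dict) -> dict:
    """Illustrative ETS-style allocator (not from the standard).
    Grant each group min(demand, configured share), then hand
    spare bandwidth to still-hungry groups in proportion to
    their configured percentages."""
    grant = {g: min(demand[g], link_bw * alloc_pct[g] / 100) for g in alloc_pct}
    spare = link_bw - sum(grant.values())
    hungry = {g for g in alloc_pct if demand[g] > grant[g]}
    while spare > 1e-9 and hungry:
        total = sum(alloc_pct[g] for g in hungry)
        for g in list(hungry):
            extra = spare * alloc_pct[g] / total
            grant[g] += min(extra, demand[g] - grant[g])
        spare = link_bw - sum(grant.values())
        hungry = {g for g in hungry if demand[g] - grant[g] > 1e-9}
    return grant

# 10 Gbps link, groups allocated 50/30/20 percent:
grant = ets_share(10, {1: 50, 2: 30, 3: 20}, {1: 8, 2: 1, 3: 2})
# Groups 2 and 3 are satisfied; group 1 also absorbs the unused 2 Gbps.
assert abs(grant[1] - 7) < 1e-6
```

In the example, group 1's configured share is only 5 Gbps, but because groups 2 and 3 leave 2 Gbps idle, group 1 ends up with 7 Gbps, exactly the "unused bandwidth is available to all" rule on the slide.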
38 Quantized Congestion Notification (QCN). IEEE 802.1Qau-2010. Dynamic congestion notification: a congestion message is sent by the congested switch directly to the source, and the source reduces its rate for that flow. Sources need to keep per-flow state and control mechanisms: easy for switch manufacturers but complex for hosts, so QCN is implemented in switches but not in hosts. It is not effective when the "source" seen by the switch is a router at the subnet edge rather than the real source: the router will simply drop the traffic, and QCN does not help in this case. Ref: I. Pepelnjak, "DCB Congestion Notification (802.1Qau)".
39 DCBX. Data Center Bridging Exchange, IEEE 802.1Qaz-2011. Uses LLDP to negotiate quality metrics and capabilities for Priority-based Flow Control, Enhanced Transmission Selection, and Quantized Congestion Notification. New TLVs: priority group definition, group bandwidth allocation, PFC enablement per priority, QCN enablement, DCB protocol profiles, FCoE and iSCSI profiles.
40 Summary of Part II 1. Ethernet's use of IDs as addresses makes it very easy to move systems in the data center: traffic stays on the same Ethernet. 2. Spanning tree is wasteful of resources and slow; Ethernet now uses shortest path bridging (similar to OSPF). 3. VLANs allow different non-trusting entities to share an Ethernet network. 4. Data center bridging extensions reduce packet loss through enhanced transmission selection and priority-based flow control.
41 Part III: Carrier Ethernet for Data Centers 1. Provider Bridges (PB) or Q-in-Q 2. Provider Backbone Bridges (PBB) or MAC-in-MAC 3. Provider Backbone Bridges with Traffic Engineering (PBB- TE) Note: Although these technologies were originally developed for carriers, they are now used inside multi-tenant data centers (clouds) 41
42 Ethernet Provider Bridge (PB). IEEE 802.1ad-2005, incorporated in IEEE 802.1Q-2011. Problem: multiple customers may have the same VLAN ID; how to keep them separate? Solutions: 1. VLAN translation: change customer VLANs to provider VLANs and back. 2. VLAN encapsulation: encapsulate customer frames. (Figure: customers A and B, each with their own VLANs 1-100, connect through CE and PE bridges across provider S-VLANs 1 and 2.) Refs: D. Bonafede, "Metro Ethernet Network"; P. Thaler et al., "IEEE 802.1Q," IETF tutorial.
43 Provider Bridge (Cont). Q-in-Q encapsulation: the provider inserts a service VLAN tag (S-tag). VLAN translation changes VLANs using a table. Allows 4K customers to be serviced, for a total of 16M VLAN combinations. 8 traffic classes using Differentiated Services Code Points (DSCP) for assured forwarding.
Frame: C-DA (48b) | C-SA (48b) | S-Tag: Type=0x88A8 (16b) + S-VID (16b) | Type=0x8100 (16b) + C-VID (16b) | Type (16b) | Payload. The 16-bit S-VID field = Priority (3b) + CFI (1b) + S-VLAN ID (12b).
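The double-tagged header can be built directly from the field widths above. A minimal Python sketch (function name and MAC values are illustrative; the tags' priority bits are left at zero):

```python
import struct

def qinq_header(c_dst: str, c_src: str, s_vid: int, c_vid: int,
                ethertype: int = 0x0800) -> bytes:
    """Q-in-Q header: the provider's S-tag (TPID 0x88A8) inserted in
    front of the customer's C-tag (TPID 0x8100). PCP/DEI bits zero."""
    mac = lambda m: bytes.fromhex(m.replace(":", ""))
    return (mac(c_dst) + mac(c_src)
            + struct.pack("!HH", 0x88A8, s_vid)   # service tag
            + struct.pack("!HH", 0x8100, c_vid)   # customer tag
            + struct.pack("!H", ethertype))

hdr = qinq_header("00:00:5e:00:53:01", "00:00:5e:00:53:02",
                  s_vid=200, c_vid=100)
assert len(hdr) == 22
assert hdr[12:14] == b"\x88\xa8" and hdr[16:18] == b"\x81\x00"
```

Because customers A and B keep their own C-VIDs while the provider distinguishes them by S-VID, the same C-VID 100 can appear under two different S-tags without collision.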
44 Provider Backbone Network (PBB). Problem: the number of MAC addresses passing through backbone bridges is too large for all core bridges to remember, and broadcast and flooded (unknown-address) frames cause unwanted traffic and security issues. Solution: IEEE 802.1ah-2008, now in 802.1Q-2011. Add new source/destination MAC addresses pointing to the ingress backbone bridge and the egress backbone bridge; core bridges then only need to know edge bridge addresses.
45 MAC-in in-mac Frame Format Provider backbone edge bridges (PBEB) forward to other PBEB s and learn customer MAC addresses PB core bridges do not learn customer MACs B-DA = Destination backbone bridge address Determined by Customer Destination Address Backbone VLANs delimit the broadcast domains in the backbone Edge Edge Core Core Backbone Edge Edge B-DA B-SA Type B-VID Type I-SID C-DA C-SA Type S-VID Type C-VID Type Payload 88A8 88E7 88A b 48b 16b 16b 16b 32b 48b 48b 16b 16b 16b 16b 16b PBB Core switches forward based on Backbone Destination Bridge Address and Backbone-VLAN ID (60 bits) Similar to 802.1ad Q-in-Q. Therefore, same EtherType. I-Tag 45
46 PBB Service Instance. The service instance ID (I-SID) indicates a specific flow: all frames on a specific port, or all frames on a specific port with a specific service VLAN, or all frames on a specific port with a specific service VLAN and a specific customer VLAN. (Table in the original slide: an example service-instance mapping of SIDs, each defined by a port/S-VLAN/C-VLAN combination, onto backbone VLANs.)
47 MAC-in in-mac (Cont) SID A PBEB PBEB SID B PBEB SID B Each Backbone VLANs (B-VLAN) can carry multiple services 24-bit SID 2 24 Service Instances in the backbone I-Tag format: I-Tag not used in the core. Includes C-DA+C-SA UCA=1 Use customer addresses (used in CFM) Priority Code Point (I-PCP) SID A SID B SIDC SID D PBEB Drop Eligibility Indicator (I-DEI) PBEB Use Customer Address (UCA) B-VLAN1 Reserved 1 B-VLAN2 47 Reserved 2 PBB Core Bridge Service Instance ID (I-SID) PBEB Customer Destination Address (C-DA) SID C SID D Customer Source Address (C-SA) 3b 1b 1b 1b 2b 24b 48b 48b
48 Connection Oriented Ethernet. Connectionless: path determined at forwarding time; varying QoS. Connection oriented: path determined at provisioning time, provisioned by management; deterministic QoS. No spanning tree and no MAC address learning: frames are forwarded based on VLAN IDs and backbone bridge addresses, not on customer MAC addresses or other customer fields (more secure). Reserved bandwidth per EVC. A pre-provisioned protection path gives better availability.
49 VLAN Cross-Connect. Circuit-oriented, connection-oriented Ethernet with Q-in-Q. Frames are forwarded based on VLAN ID and input port; no MAC learning. The forwarding table maps (input port, VLAN ID) to output port. Example services carried this way: P2P VPN, wholesale VoD, VoIP, high-speed Internet, broadcast TV.
50 PBB-TE Provider Backbone Bridges with Traffic Engineering (PBB-TE) IEEE 802.1Qay-2009 now in 802.1Q-2011 Provides connection oriented P2P (E-Line) Ethernet service For PBB-TE traffic VLANs: Turn off MAC learning Discard frames with unknown address and broadcasts. No flooding Disable Spanning Tree Protocol. Add protection path switching for each direction of the trunk Switch forwarding tables are administratively populated using management Same frame format as with MAC-in-MAC. No change. 50
51 PBB-TE QoS. Guarantees QoS with no need for MPLS or SONET/SDH. UNI traffic is classified by port, service VLAN ID, customer VLAN ID, priority, and unicast/multicast. UNI ports are policed: excess traffic is dropped. There is no policing at NNI ports, only remarking if necessary. Traffic may be marked and remarked at both UNI and NNI. (Figure: classification, policing, and marking at the subscriber UNI; scheduling, remarking, and shaping across ENNIs between service providers.)
52 Ethernet Tagged Frame Format Evolution
Original Ethernet: C-DA | C-SA | Type | Payload
IEEE 802.1Q VLAN: C-DA | C-SA | 0x8100 | C-VID | Type | Payload
IEEE 802.1ad PB: C-DA | C-SA | 0x88A8 | S-VID | 0x8100 | C-VID | Type | Payload
IEEE 802.1ah PBB or 802.1Qay PBB-TE: B-DA | B-SA | 0x88A8 | B-VID | 0x88E7 | I-SID | C-DA | C-SA | 0x88A8 | S-VID | 0x8100 | C-VID | Type | Payload
Tag EtherTypes: Customer VLAN = 0x8100; Service VLAN or Backbone VLAN = 0x88A8; Backbone Service Instance = 0x88E7
53 Comparison of Technologies (Ref: Bonafede)

Feature             | Basic Ethernet | MPLS          | PB                    | PBB-TE
Resilience          | No protection  | Fast reroute  | SPB/LAG protection    | Fast reroute
Security            | No             | Circuit based | VLAN                  | Circuit based
Multicast           | Yes            | Inefficient   | Yes                   | No (P2P only)
QoS                 | Priority       | Diffserv      | Diffserv + guaranteed | Diffserv + guaranteed
Legacy Services     | No             | Yes (PWE3)    | No                    | No
Traffic Engineering | No             | Yes           | No                    | Yes
Scalability         | Limited        | Complex       | Q-in-Q                | Q-in-Q + MAC-in-MAC
Cost                | Low            | High          | Medium                | Medium
OAM                 | No             | Some          | Yes                   | Yes
54 Summary of Part III 1. PB Q-in-Q extension allows Internet/Cloud service providers to allow customers to have their own VLAN IDs 2. PBB MAC-in-MAC extension allows customers/tenants to have their own MAC addresses and allows service providers to not have to worry about them in the core switches 3. PBB allows very large Ethernet networks spanning over several backbone carriers 4. PBB-TE extension allows connection oriented Ethernet with QoS guarantees and protection 54
55 Part IV: Virtual Bridging 1. Virtual Bridges to connect virtual machines 2. IEEE Virtual Edge Bridging Standard 3. Single Root I/O Virtualization (SR-IOV) 4. Aggregating Bridges and Links: VSS and vPC 5. Bridges with massive number of ports: VBE
56 vswitch Problem: Multiple VMs on a server need to use one physical network interface card (pnic) Solution: Hypervisor creates multiple vnics connected via a virtual switch (vswitch) pnic is controlled by hypervisor and not by any individual VM Notation: From now on prefixes p and v refer to physical and virtual, respectively. For VMs only, we use upper case V. VM vnic Hypervisor VM vnic vswitch pnic pswitch VM vnic pm Ref: G. Santana, Datacenter Virtualization Fundamentals, Cisco Press, 2014, ISBN:
57 Virtual Bridging. Where should most of the tenant isolation take place? 1. VM vendors: software NICs in the hypervisor with a Virtual Edge Bridge (VEB) (CPU overhead, not externally manageable, not all switch features). 2. Switch vendors: the external switch provides virtual channels for inter-VM communication using a Virtual Ethernet Port Aggregator (VEPA), 802.1Qbg (a software upgrade). 3. NIC vendors: the NIC provides virtual ports using Single-Root I/O Virtualization (SR-IOV) on the PCI bus.
58 Virtual Edge Bridge. IEEE 802.1Qbg-2012, the standard for vswitches. Two modes for vswitches to handle local VM-to-VM traffic: Virtual Edge Bridge (VEB): switch internally; Virtual Ethernet Port Aggregator (VEPA): switch externally. A VEB could be in a hypervisor or in a network interface card; it may learn MAC addresses or be configured with them, and may participate in spanning tree or be configured. Advantage: no need for the external switch in some cases.
59 Virtual Ethernet Port Aggregator (VEPA) VEPA simply relays all traffic to an external bridge External bridge forwards the traffic. Called Hairpin Mode. Returns local VM traffic back to VEPA Note: Legacy bridges do not allow traffic to be sent back to the incoming port within the same VLAN VEPA Advantages: Visibility: External bridge can see VM to VM traffic. Policy Enforcement: Better. E.g., firewall Performance: Simpler vswitch Less load on CPU Management: Easier Both VEB and VEPA can be implemented on the same NIC in the same server and can be cascaded. Ref: HP, Facts about the IEEE 802.1Qbg proposal, Feb 2011, 6pp., 59
60 Problem: Combining Bridges. The number of VMs is growing very fast, so switches with a very large number of ports are needed, and it is easier to manage one large bridge than many small ones. How to make very large switches (~1000 ports)? Solutions: combine multiple pswitches into a single logical switch: 1. Distributed Virtual Switch (DVS) 2. Fabric Extension (FEX) 3. Virtual Bridge Port Extension (VBE)
61 Distributed Virtual Switch (DVS) VMware idea to solve the scalability issue A centralized DVS controller manages vswitches on many physical hosts DVS decouples the control and data plane of the switch so that each VM has a virtual data plane (virtual Ethernet module or VEM) managed by a centralized control plane (virtual Switch Module or VSM) Appears like a single distributed virtual switch Allows simultaneous creation of port groups on multiple pms Provides an API so that other networking vendors can manage vswitches and vnics 61
62 Fabric Extenders. A fabric extender (FEX) consists of ports that are managed by a remote parent switch. Example: 12 fabric extenders, each with 48 host ports, connected to a parent switch over high-speed uplinks provide a virtual switch with 576 host ports ("chassis virtualization"). All software updates, management, and the forwarding/control plane are handled centrally by the parent switch. A FEX can have an active and a standby parent. Ref: P. Beck et al., "IBM and Cisco: Together for a World Class Data Center," IBM Redbook, 2013, 654 pp.
63 Virtual Bridge Port Extension (VBE). IEEE 802.1BR-2012, the standard for fabric extender functions. Specifies how to form an extended bridge consisting of a controlling bridge and bridge port extenders. Extenders can be cascaded, and some extenders may be in a vswitch in a server hypervisor. All traffic is relayed by the controlling bridge. The extended bridge behaves as a single bridge.
64 Summary of Part IV 1. Network virtualization includes virtualization of NICs, bridges, routers, and L2 networks. 2. Virtual Edge Bridge (VEB) vswitches switch internally, while Virtual Ethernet Port Aggregator (VEPA) vswitches switch externally. 3. Fabric Extension and Virtual Bridge Extension (VBE) allow creating switches with a large number of ports using port extenders (which may be vswitches).
65 Part V: LAN Extension and Partitioning 1. Transparent Interconnection of Lots of Links (TRILL) 2. Network Virtualization using GRE (NVGRE) 3. Virtual eXtensible LANs (VXLAN) 4. Stateless Transport Tunneling Protocol (STT)
66 Challenges of LAN Extension. Broadcast storms: unknown-address and broadcast frames may create excessive flooding. Loops: easy to form in a large network. STP issues: large spanning tree diameter (more than 7 hops); the root can become a bottleneck and a single point of failure; multiple paths remain unused. Tromboning: dual-attached servers and switches generate excessive cross traffic. Security: data on the LAN extension must be encrypted.
67 TRILL. Transparent Interconnection of Lots of Links: allows a large campus to be a single extended LAN. LANs allow free mobility inside the LAN, but: paths are inefficient with spanning tree; link utilization is poor since many links are disabled and multipath is not allowed; and they are unstable: small changes in the network cause large changes in the spanning tree. IP subnets are not good for mobility because IP addresses change as nodes move, breaking transport connections; but IP routing is efficient, optimal, and stable. Solution: take the best of both worlds: use MAC addresses with IP-style routing. Ref: RFCs 5556, 6325, 6326, 6327, 6361.
68 TRILL Architecture. Routing Bridges (RBridges) encapsulate L2 frames and route them to destination RBridges, which decapsulate and forward them. The header contains a hop limit to avoid looping. RBridges run IS-IS to compute pair-wise optimal paths for unicast and distribution trees for multicast. RBridges learn MAC addresses by source learning and by exchanging their MAC tables with other RBridges. Each VLAN on a link has one (and only one) designated RBridge, chosen by the IS-IS election protocol. Ref: R. Perlman, "RBridges: Transparent Routing," Infocom.
69 TRILL Encapsulation Format. Packet layout: Outer header | TRILL header | Original 802.1Q packet. TRILL header: Version (2b) | Reserved (2b) | Multi-Destination (1b) | Options Length (5b) | Hops to Live (6b) | Egress RBridge (16b) | Ingress RBridge (16b) | Options. For outer headers, both PPP and Ethernet headers are allowed; PPP is used for long haul. The outer Ethernet header can carry a VLAN ID corresponding to the VLAN used for TRILL. Priority bits in outer headers are copied from the inner VLAN.
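The 6-byte fixed TRILL header can be packed directly from the bit widths listed above. A hedged Python sketch (function name and nickname values are illustrative):

```python
import struct

def trill_header(multi_dest: int, hop_count: int,
                 egress: int, ingress: int, op_len: int = 0) -> bytes:
    """6-byte TRILL header: version (2b, here 0), reserved (2b),
    multi-destination (1b), options length (5b), hops to live (6b),
    then egress and ingress RBridge nicknames (16b each)."""
    first16 = (multi_dest << 11) | (op_len << 6) | hop_count
    return struct.pack("!HHH", first16, egress, ingress)

hdr = trill_header(multi_dest=0, hop_count=20,
                   egress=0x1001, ingress=0x2002)
assert len(hdr) == 6
assert hdr[1] & 0x3F == 20      # hop count sits in the low 6 bits
```

Each RBridge along the path decrements the 6-bit hops-to-live field and discards the frame at zero, which is how TRILL tolerates transient loops that classic bridging cannot.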
70 TRILL Features Transparent: No change to capabilities. Broadcast, Unicast, Multicast (BUM) support. Auto-learning. Zero Configuration: RBridges discover their connectivity and learn MAC addresses automatically Hosts can be multi-homed VLANs are supported Optimized route No loops Legacy bridges with spanning tree in the same extended LAN 70
71 TRILL: Summary TRILL allows a large campus to be a single Extended LAN Packets are encapsulated and routed using IS-IS routing 71
72 GRE. Generic Routing Encapsulation (RFC 1701/1702): generic X over Y for any X and Y. Over IPv4, GRE packets use protocol number 47. Optional checksum, loose/strict source routing, and key; the key is used to authenticate the source. Recursion control: the number of additional encapsulations allowed (0 = restricted to a single provider network end-to-end). Offset: points to the next source route field to be used. IP or IPsec are commonly used as delivery headers.
Packet: Delivery header | GRE header | Payload. GRE header: Checksum Present (1b) | Routing Present (1b) | Key Present (1b) | Seq. # Present (1b) | Strict Source Route (1b) | Recursion Control (3b) | Flags (5b) | Version (3b) | Protocol Type (16b) | Checksum (16b) | Offset (16b) | Key (32b) | Seq. # (32b) | Source Routing List (variable)
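A minimal GRE header is just 4 bytes of flags/version plus the protocol type, with optional fields appended when their presence bits are set. A hedged Python sketch of that layout (function name is invented; only the key option is modeled):

```python
import struct

def gre_header(protocol_type: int, key: int = None) -> bytes:
    """Minimal GRE header (RFC 1701 layout): 16 bits of flags/version,
    a 16-bit protocol type, and an optional 32-bit key with the
    Key Present bit (bit 13) set when a key is supplied."""
    flags = 0
    tail = b""
    if key is not None:
        flags |= 1 << 13            # K (Key Present)
        tail = struct.pack("!I", key)
    return struct.pack("!HH", flags, protocol_type) + tail

# IPv4 payload (protocol type 0x0800) carried with a tenant key:
h = gre_header(0x0800, key=0x00BEEF)
assert len(h) == 8 and (h[0] & 0x20)   # K bit lands in the first byte
```

Without a key the header is only 4 bytes; the presence bits are what let receivers parse the variable-length option area deterministically.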
73 NVGRE. Ethernet over GRE over IP (point-to-point). A unique 24-bit Virtual Subnet Identifier (VSID) is carried in the upper 24 bits of the GRE key field, so 2^24 tenants can share the network. A unique IP multicast address is used for BUM (Broadcast, Unknown, Multicast) traffic on each VSID. Equal-cost multipath (ECMP) is allowed on point-to-point tunnels. Encapsulation across the provider network: outer Ethernet | IP | GRE | inner Ethernet frame. Ref: M. Sridharan, "NVGRE: Network Virtualization using GRE," Aug 2013.
74 NVGRE (Cont) In a cloud, a pswitch or a vswitch can serve as tunnel endpoint. VMs need to be in the same VSID to communicate. VMs in different VSIDs can have the same MAC address. Inner IEEE 802.1Q tag, if present, is removed. [Figure: VMs grouped into virtual subnets spanning multiple physical subnets connected over the Internet] Ref: Emulex, NVGRE Overlay Networks: Enabling Network Scalability, Aug 2012, 11pp.,
75 VXLAN Virtual extensible Local Area Networks (VXLAN): an L3 solution to isolate multiple tenants in a data center (the L2 solutions are Q-in-Q and MAC-in-MAC). Developed by VMware; supported by many companies in the IETF NVO3 working group. Problem: 4096 VLANs are not sufficient in a multi-tenant data center. Tenants need to control their MAC, VLAN, and IP address assignments, so overlapping MAC, VLAN, and IP addresses must be allowed. Spanning tree is inefficient with a large number of switches: too many links are disabled. Better throughput with IP equal cost multipath (ECMP). Ref: M. Mahalingam, VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks, draft-mahalingam-dutt-dcops-vxlan-04, May 8, 2013,
76 VXLAN Architecture Create a virtual L2 overlay (called VXLAN) over L3 networks. 2^24 VXLAN Network Identifiers (VNIs). Only VMs in the same VXLAN can communicate. vswitches serve as VTEPs (VXLAN Tunnel End Points): they encapsulate L2 frames in UDP over IP and send them to the destination VTEP(s). Segments may have overlapping MAC addresses and VLANs, but L2 traffic never crosses a VNI. [Figure: three tenants, each with its own virtual L2 network overlaid on a shared L3 network]
77 VXLAN Deployment Example Example: three tenants, 3 VNIs, 4 tunnels for unicast + 3 tunnels for multicast (not shown). [Figure: three hypervisors with VTEPs IP1, IP2, IP3 on an L3 network; VMs of tenant 1 in VNI 34, tenant 2 in VNI 22, and tenant 3 in VNI 74, spread across the hypervisors]
78 VXLAN Encapsulation Format Outer VLAN tag is optional; used to isolate VXLAN traffic on the LAN. Source VM ARPs to find the destination VM's MAC address. All L2 multicast/unknown frames are sent via IP multicast. Destination VM sends a standard IP unicast ARP response. Destination VTEP learns the inner-src-MAC-to-outer-src-IP mapping, which avoids unknown-destination flooding for the returning responses. Frame layout (only key fields shown): Dest. VTEP MAC | Source VTEP MAC | Outer VLAN | Dest. VTEP IP | Source VTEP IP | UDP Header | VXLAN Header | Dest VM MAC | Source VM MAC | Tenant VLAN | Ethernet Payload. VXLAN header: Flags (8b), Reserved (24b), VNI (24b), Reserved (8b).
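The 8-byte VXLAN header above (I flag set, 24-bit VNI, the rest reserved) is simple to construct; a minimal sketch following RFC 7348:

```python
import struct

def vxlan_header(vni):
    """Pack the 8-byte VXLAN header per RFC 7348:
    Flags(8b, I bit = 0x08) | Reserved(24b) | VNI(24b) | Reserved(8b)."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(vxlan_header(vni=5001).hex())  # 0800000000138900
```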
79 VXLAN Encapsulation Format (Cont) IGMP is used to prune multicast trees. 7 of the 8 bits in the flags field are reserved; the I flag bit is set if the VNI field is valid. The UDP source port is a hash of the inner MAC header, which allows load balancing using Equal Cost Multipath (ECMP) with L3-L4 header hashing. VMs are unaware of whether they are operating on a VLAN or a VXLAN. VTEPs need to learn the MAC addresses of other VTEPs and of client VMs in the VNIs they are handling. A VXLAN gateway switch can forward traffic to/from non-VXLAN networks, encapsulating or decapsulating the packets.
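To make the ECMP point above concrete, a VTEP can derive the outer UDP source port from a hash of the inner Ethernet header, so different flows spread across paths while each flow stays on one path. CRC32 and the dynamic port range below are illustrative choices; the spec only requires some hash of the inner headers:

```python
import zlib

def vxlan_src_port(inner_frame: bytes) -> int:
    """Derive the outer UDP source port from a hash of the inner Ethernet
    header. CRC32 and the dynamic port range (49152-65535) are illustrative
    choices, not mandated by RFC 7348."""
    h = zlib.crc32(inner_frame[:14])  # inner MAC header: dst MAC, src MAC, EtherType
    return 49152 + (h % (65536 - 49152))

frame = bytes.fromhex("ffffffffffff00163e1122330800")  # broadcast dst, some src, IPv4
port = vxlan_src_port(frame)
assert 49152 <= port <= 65535
```

All packets of one flow hash to the same port, so L3-L4 ECMP keeps them on one path.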
80 VXLAN: Summary VXLAN solves the problem of multiple tenants with overlapping MAC addresses, VLANs, and IP addresses in a cloud environment. A server may have VMs belonging to different tenants No changes to VMs. Hypervisors responsible for all details. Uses UDP over IP encapsulation to isolate tenants 80
81 Stateless Transport Tunneling Protocol (STT) Ethernet over TCP-like over IP tunnels. GRE or IPSec tunnels can also be used if required. Tunnel endpoints may be inside the end systems (vswitches). Designed for large storage blocks (up to 64 kB); fragmentation is allowed. Most other overlay protocols use UDP and disallow fragmentation, leading to Maximum Transmission Unit (MTU) issues. TCP-like means stateless TCP: the header is identical to TCP (same protocol number 6) but there is no 3-way handshake, no connections, no windows, no retransmissions, and no congestion state. Stateless transport (recognized by a standard port number). Broadcast, Unknown, Multicast (BUM) traffic is handled by IP multicast tunnels. Ref: B. Davie and J. Gross, "A Stateless Transport Tunneling Protocol for Network Virtualization (STT)," Sep 2013,
82 LSO and LRO Large Send Offload (LSO): the host hands a large chunk of data plus metadata to the NIC; the NIC makes MSS-size segments and adds checksum, TCP, IP, and MAC headers to each segment. Large Receive Offload (LRO): the NIC attempts to reassemble multiple TCP segments and passes larger chunks to the host; the host does the final reassembly with fewer per-packet operations. STT takes advantage of LSO and LRO features, if available. Using a protocol number other than 6 would prevent LSO/LRO from handling STT. [Figure: host hands metadata + payload to the NIC for LSO; LRO reassembles L2/IP/TCP segments on receive]
83 STT Optimizations Large data size: less overhead per payload byte. Context ID: 64-bit tunnel endpoint identifier. Optimizations: 2-byte padding is added to Ethernet frames to make their size a multiple of 32 bits. The source port is a hash of the inner header, enabling ECMP with each flow taking a different path and all packets of a flow taking one path. No protocol type field: the payload is assumed to be Ethernet, which can carry any payload identified by its own protocol type. [Figure: VM vnics attach to a vswitch; the vswitch NICs tunnel across the underlay network]
84 STT Frame Format 16-bit MSS field: 2^16 B = 64 kB maximum. L4 Offset: from the end of the STT header to the start of the encapsulated L4 (TCP/UDP) header; helps locate the payload quickly. Checksum Verified: checksum covers the entire payload and is valid. Checksum Partial: checksum only includes the TCP/IP headers. Layout: IP Header | TCP-Like Header | STT Header | STT Payload. STT header fields: Version (8b), Flags (8b), L4 Offset (8b), Reserved (8b), Maximum Segment Size (16b), Priority Code Point (3b), VLAN ID Valid (1b), VLAN ID (12b), Context ID (64b), Padding (16b). Flag bits: Checksum Verified (1b), Checksum Partial (1b), IP Version is IPv4 (1b), TCP Payload (1b), Reserved (4b).
85 TCP-Like Header in STT Destination Port: standard port to be requested from IANA. Source Port: selected for efficient ECMP. Ack Number: STT payload sequence identifier; the same in all segments of a payload. Sequence Number (32b): length of the STT payload (upper 16b) + offset of the current segment (lower 16b). Correctly handled by NICs with the Large Receive Offload (LRO) feature. No acks: STT delivers partial payloads to higher layers; a higher-layer TCP can handle retransmissions if required. Middleboxes will need to be programmed to let STT pass through. Fields: Source Port (Random, 16b) | Dest. Port (Standard, 16b) | Sequence Number* (STT Payload Length 16b + Segment Offset 16b) | Ack Number* (Payload Sequence #, 32b) | Data Offset (16b). *Different meaning than in TCP.
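The overloaded sequence number described above (payload length in the upper 16 bits, segment offset in the lower 16) can be sketched as:

```python
def stt_seq(frame_len, seg_offset):
    """Compose STT's overloaded 32-bit 'sequence number': total STT frame
    length in the upper 16 bits, this segment's byte offset in the lower 16."""
    assert 0 <= frame_len < 2**16 and 0 <= seg_offset < 2**16
    return (frame_len << 16) | seg_offset

# A 32 kB STT frame cut into 1448-byte segments by LSO:
for off in (0, 1448, 2896):
    print(hex(stt_seq(32768, off)))  # 0x80000000, 0x800005a8, 0x80000b50
```

Because length and offset grow exactly as TCP sequence numbers would, LRO hardware can coalesce the segments without knowing about STT.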
86 STT Summary STT solves the problem of efficient transport of large 64 KB storage blocks Uses Ethernet over TCP-Like over IP tunnels Designed for software implementation in hypervisors 86
87 Summary of Part V 1. TRILL allows Ethernet to span a large campus using IS-IS encapsulation 2. NVGRE uses Ethernet over GRE for L2 connectivity. 3. VXLAN uses Ethernet over UDP over IP 4. STT uses Ethernet over TCP-like stateless protocol over IP. 87
88 Overall Summary 1. Virtualization allows applications to use resources without worrying about their location, size, format, etc. 2. Ethernet's use of IDs as addresses makes it very easy to move systems in the data center Keep traffic on the same Ethernet 3. Cloud computing requires Ethernet to be extended globally and partitioned for sharing by a very large number of customers who have complete control over their address assignment and connectivity, and requires rapid provisioning of a large number of virtual NICs and switches 4. Spanning tree is wasteful of resources and slow. Ethernet now uses shortest path bridging (similar to OSPF)
89 Overall Summary (Cont) 5. Data center bridging extensions reduce packet loss via Enhanced Transmission Selection and Priority-based Flow Control, making Ethernet suitable for storage traffic. 6. The PB Q-in-Q extension lets Internet/cloud service providers' customers have their own VLAN IDs 7. The PBB MAC-in-MAC extension allows customers/tenants to have their own MAC addresses and relieves service providers from handling them in the core switches 8. The PBB-TE extension allows connection-oriented Ethernet with QoS guarantees and protection 9. Virtual Edge Bridge (VEB) vswitches switch internally while Virtual Ethernet Port Aggregator (VEPA) vswitches switch externally.
90 Overall Summary (Cont) 10. SR-IOV technology allows multiple virtual NICs via PCI and avoids the need for an internal vswitch. 11. Fabric Extension and Virtual Bridge Extension (VBE) allow creating switches with a large number of ports using port extenders (which may be vswitches) 12. TRILL allows Ethernet to span a large campus using IS-IS encapsulation 13. NVGRE uses Ethernet over GRE for L2 connectivity. 14. VXLAN uses Ethernet over UDP over IP 15. STT uses Ethernet over a TCP-like stateless protocol over IP.
91 Acronyms ADC Application Delivery Controller API Application Programming Interface ARP Address Resolution Protocol BER Bit Error Rate BUM Broadcast, Unknown, Multicast CapEx Capital Expenditure CD Compact Disk CE Customer Edge CFI Canonical Format Indicator CFM Connectivity Fault Management CPU Central Processing Unit CRC Cyclic Redundancy Check CSMA/CD Carrier Sense Multiple Access with Collision Detection DA Destination Address DCB Data Center Bridging DCBX Data Center Bridging Exchange 91
92 Acronyms (Cont) DEI Drop Eligibility Indicator DNS Domain Name Service DSCP Differentiated Services Code Points DVS Distributed Virtual Switch ECMP Equal-cost multi-path ENNI Ethernet Network to Network Interface EPL Ethernet Private Line ETS Enhanced Transmission Selection EVC Ethernet Virtual Connection EVP-Tree Ethernet Virtual Private Tree EVPL Ethernet Virtual Private Line EVPLAN Ethernet Virtual Private LAN EVPN Ethernet Virtual Private Network FCoE Fibre Channel over Ethernet FEX Fabric Extension GB Giga Byte
93 Acronyms (Cont) GMPLS Generalized Multi-Protocol Label Switching GRE Generic Routing Encapsulation HSRP Hot Standby Router Protocol IANA Internet Assigned Numbers Authority ID Identifier IEEE Institute of Electrical and Electronics Engineers IETF Internet Engineering Task Force IGMP Internet Group Management Protocol IO Input/Output IP Internet Protocol IPSec Secure IP IPv4 Internet Protocol Version 4 IPv6 Internet Protocol Version 6 IS-IS Intermediate System to Intermediate System iSCSI Internet Small Computer System Interface
94 Acronyms (Cont) kB Kilo Byte LACP Link Aggregation Control Protocol LAN Local Area Network LISP Locator/ID Separation Protocol LLDP Link Layer Discovery Protocol LRO Large Receive Offload LSO Large Send Offload MAC Media Access Control MDI Media Dependent Interface MPLS Multi-Protocol Label Switching MR-IOV Multi-Root I/O Virtualization MSB Most Significant Byte MSS Maximum Segment Size MST Multiple Spanning Tree MSTP Multiple Spanning Tree Protocol MTU Maximum Transmission Unit
95 Acronyms (Cont) NIC Network Interface Card NNI Network-to-Network Interface NVGRE Network Virtualization using GRE NVO3 Network Virtualization Overlay using L3 OAM Operation, Administration, and Management OpEx Operation Expenses OSPF Open Shortest Path First OTV Overlay Transport Virtualization PB Provider Bridge PBB Provider Backbone Bridge PBB-TE Provider Backbone Bridge with Traffic Engineering PBEB Provider Backbone Edge Bridge PCI Peripheral Component Interconnect PCI-SIG PCI Special Interest Group PCIe PCI Express PCP Priority Code Point
96 Acronyms (Cont) PE Provider Edge PF Physical Function PFC Priority-based Flow Control PHY Physical Layer pm Physical Machine pnic Physical Network Interface Card PPP Point-to-Point Protocol pswitch Physical Switch PW Pseudo wire PWoGRE Pseudo wire over Generic Routing Encapsulation PWoMPLS Pseudo wire over Multi Protocol Label Switching QCN Quantized Congestion Notification QoS Quality of Service RAID Redundant Array of Independent Disks RBridge Routing Bridge RFC Request for Comments 96
97 Acronyms (Cont) RSTP Rapid Spanning Tree Protocol SA Source Address SDH Synchronous Digital Hierarchy SID Service Identifier SNIA Storage Network Industry Association SONET Synchronous Optical Network SPB Shortest Path Bridging SR-IOV Single Root I/O Virtualization STP Spanning Tree Protocol STT Stateless Transport Tunneling Protocol TCP Transmission Control Protocol TE Traffic Engineering TLV Type-Length-Value TP Transport Protocol TPI Tag Protocol Identifier TRILL Transparent Interconnection of Lots of Links 97
98 Acronyms (Cont) TV Television UCA Use Customer Address UDP User Datagram Protocol UNI User Network Interface VBE Virtual Bridge Port Extension VDC Virtual Device Contexts VEB Virtual Edge Bridge VEM Virtual Ethernet Module VEPA Virtual Ethernet Port Aggregator VF Virtual Function VID VLAN ID VLAN Virtual LAN VM Virtual Machine VNI Virtual Network ID vnic Virtual Network Interface Card VoD Video on Demand 98
99 Acronyms (Cont) VOIP Voice over IP vpc Virtual Port Channels VPLS Virtual Private LAN Service VPN Virtual Private Network VRF Virtual Routing and Forwarding VRRP Virtual Router Redundancy Protocol VSID Virtual Subnet Identifier VSM Virtual Switch Module VSS Virtual Switch System vswitch Virtual Switch VTEP VXLAN Tunnel End Point VXLAN Virtual Extensible LAN
MPLS VPN Services. PW, VPLS and BGP MPLS/IP VPNs
A Silicon Valley Insider MPLS VPN Services PW, VPLS and BGP MPLS/IP VPNs Technology White Paper Serge-Paul Carrasco Abstract Organizations have been demanding virtual private networks (VPNs) instead of
M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2.
M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. What are the different types of virtualization? Explain
Fibre Channel over Ethernet in the Data Center: An Introduction
Fibre Channel over Ethernet in the Data Center: An Introduction Introduction Fibre Channel over Ethernet (FCoE) is a newly proposed standard that is being developed by INCITS T11. The FCoE protocol specification
Bandwidth Profiles for Ethernet Services Ralph Santitoro
Ralph Santitoro Abstract This paper provides a comprehensive technical overview of bandwidth profiles for Ethernet services, based on the work of the Metro Ethernet Forum (MEF) Technical Committee. The
Course. Contact us at: Information 1/8. Introducing Cisco Data Center Networking No. Days: 4. Course Code
Information Price Course Code Free Course Introducing Cisco Data Center Networking No. Days: 4 No. Courses: 2 Introducing Cisco Data Center Technologies No. Days: 5 Contact us at: Telephone: 888-305-1251
Bandwidth Profiles for Ethernet Services Ralph Santitoro
Ralph Santitoro Abstract This paper provides a comprehensive technical overview of bandwidth profiles for Ethernet services, based on the work (as of October 2003) of the Metro Ethernet Forum (MEF) Technical
White Paper: Carrier Ethernet
White Paper: Carrier Ethernet Activity and Task: JRA1 T1 Target Audience: NREN technical networking specialists Document Code: Authors: J. Kloots (SURFnet), V. Olifer (JANET) Acknowledgement: The research
Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center
Expert Reference Series of White Papers Planning for the Redeployment of Technical Personnel in the Modern Data Center [email protected] www.globalknowledge.net Planning for the Redeployment of
CS419: Computer Networks. Lecture 9: Mar 30, 2005 VPNs
: Computer Networks Lecture 9: Mar 30, 2005 VPNs VPN Taxonomy VPN Client Network Provider-based Customer-based Provider-based Customer-based Compulsory Voluntary L2 L3 Secure Non-secure ATM Frame Relay
Virtual networking technologies at the server-network edge
Virtual networking technologies at the server-network edge Technology brief Introduction... 2 Virtual Ethernet Bridges... 2 Software-based VEBs Virtual Switches... 2 Hardware VEBs SR-IOV enabled NICs...
Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family
Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family White Paper June, 2008 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL
Scalability Analysis of Metro Ethernet
Scalability Analysis of Metro Ethernet Heikki Mahkonen Helsinki University of Technology Email: [email protected] Abstract This document gives a scalability analysis of Metro Ethernet architecture.
TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems
for Service Provider Data Center and IXP Francois Tallet, Cisco Systems 1 : Transparent Interconnection of Lots of Links overview How works designs Conclusion 2 IETF standard for Layer 2 multipathing Driven
Network Layer: Network Layer and IP Protocol
1 Network Layer: Network Layer and IP Protocol Required reading: Garcia 7.3.3, 8.1, 8.2.1 CSE 3213, Winter 2010 Instructor: N. Vlajic 2 1. Introduction 2. Router Architecture 3. Network Layer Protocols
Rohde & Schwarz R&S SITLine ETH VLAN Encryption Device Functionality & Performance Tests
Rohde & Schwarz R&S Encryption Device Functionality & Performance Tests Introduction Following to our test of the Rohde & Schwarz ETH encryption device in April 28 the European Advanced Networking Test
Using Network Virtualization to Scale Data Centers
Using Network Virtualization to Scale Data Centers Synopsys Santa Clara, CA USA November 2014 1 About Synopsys FY 2014 (Target) $2.055-2.065B* 9,225 Employees ~4,911 Masters / PhD Degrees ~2,248 Patents
Server and Storage Virtualization
Server and Storage Virtualization. Raj Jain Washington University in Saint Louis Saint Louis, MO 63130 [email protected] These slides and audio/video recordings of this class lecture are at: 7-1 Overview
iscsi Top Ten Top Ten reasons to use Emulex OneConnect iscsi adapters
W h i t e p a p e r Top Ten reasons to use Emulex OneConnect iscsi adapters Internet Small Computer System Interface (iscsi) storage has typically been viewed as a good option for small and medium sized
Virtual Network Exceleration OCe14000 Ethernet Network Adapters
WHI TE PA P ER Virtual Network Exceleration OCe14000 Ethernet Network Adapters High Performance Networking for Enterprise Virtualization and the Cloud Emulex OCe14000 Ethernet Network Adapters High Performance
Post-Class Quiz: Telecommunication & Network Security Domain
1. What type of network is more likely to include Frame Relay, Switched Multi-megabit Data Services (SMDS), and X.25? A. Local area network (LAN) B. Wide area network (WAN) C. Intranet D. Internet 2. Which
