Next Generation Data Center Networking
Intelligent Information Network
Ami Ben-Amram, Systems Engineering Consultant, amib@cisco.com, Cisco Israel
Transparency in the Eye of the Beholder
- With virtualization, VMs have a transparent view of their resources
- But it is difficult to monitor and apply network and storage policy back to individual virtual machines
- Scaling globally depends on maintaining transparency while also providing operational consistency
Why the Network is Changing
1. Desire for VM-level access-layer policy and monitoring
2. Virtualization is driving higher link utilization
3. More demanding role of the network (e.g., VMware DRS)
4. Current approaches lead to inconsistent network policies
VN-Link Brings VM-Level Granularity
VMotion problems:
- VMotion may move VMs across physical ports; the network policy must follow
- Impossible to view or apply policy to locally switched traffic (e.g., between VMs on VLAN 101)
- Cannot correlate traffic on physical links from multiple VMs
VN-Link:
- Extends the network to the VM
- Consistent services
- Coordinated, coherent management
- Continuum of deployment options
What is VN-Link?
VN-Link, or Virtual Network Link, is a term that describes a new set of features and capabilities enabling VM interfaces to be individually identified, configured, monitored, migrated, and diagnosed.
The term literally refers to the link created between a VM and a Cisco switch. It is the logical equivalent and combination of a VM NIC (vNIC), a Cisco switch virtual Ethernet interface (vEth), and the RJ-45 patch cable that hooks them together.
VN-Link requires platform support for Port Profiles, Virtual Ethernet interfaces, VirtualCenter integration, and Virtual Ethernet mobility.
VN-Link with the Cisco Nexus 1000V (Software Based)
- Industry's first third-party VMware ESX switch, built on Cisco NX-OS
- Compatible with existing switching platforms
- Maintains the VirtualCenter provisioning model unmodified for server administration, while allowing network administration of the Nexus 1000V via the familiar Cisco NX-OS CLI
- Announced at VMworld 2008; shipping 2Q09
Benefits: policy-based VM connectivity; mobility of network and security properties; non-disruptive operational model
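As a sketch of that operational model: a port profile defined in NX-OS on the Nexus 1000V surfaces in VirtualCenter as a port group that the server admin assigns to VMs, and the policy then follows the VM across VMotion. The profile name and VLAN below are hypothetical, and exact commands can vary by release.

    ! Hypothetical port profile on the Nexus 1000V VSM.
    port-profile type vethernet WebApp
      vmware port-group             ! publish to VirtualCenter as a port group
      switchport mode access
      switchport access vlan 101    ! the VLAN from the earlier VN-Link example
      no shutdown
      state enabled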
VN-Link with the Nexus 5000 (Hardware Based)
- Allows scalable hardware-based implementations through VN-Link hardware switches
- Standards-based initiative: Cisco and VMware proposal in IEEE 802 to specify Network Interface Virtualization
- Combines virtual and physical network operations into one managed node
- Future availability
Benefits: policy-based VM connectivity; mobility of network and security properties; non-disruptive operational model
Future Cisco VN-Link Architecture
- Flexible connectivity: SW- or HW-based options; blade or rack servers; scalable 1 or 10G connectivity (virtual, blades, SW-based FEX); supports both centralized (VNTag) and local (N1K) switching
- Unified management: manage the entire infrastructure through the N5K; configuration, policy, and statistics managed centrally and exposed to the server admin
- Optimized access layer: distributed SW-based feature velocity plus centralized HW-based high-performance services; CPU-intensive features offloaded to N5K hardware
Components: hypervisor with Nexus 1000V VEM (SW) or interface virtualizer (HW); Nexus 1000V VSM; Nexus 5000; FEX (blade or TOR)
Cisco Virtualization-Centric Networking
1. Virtualization-aware access layer
2. Policy-based network management
3. Large-scale virtual machine mobility
Network Scale Virtualization
VMotion moves VMs within and between data centers, up to the service provider:
- Virtualize at cluster scale
- Virtualize at network scale
- Virtualize at data center scale
Towards Cloud Computing
- Standards-based virtualization at network scale
- Transparent interoperability between on-premise and off-premise computing (e.g., VDI and DR)
- Enterprise and service provider use cases
VM and Blade Servers Optimized SAN
Virtual Machines (VMs) and Storage Networking
Switching performance: VMs pose new requirements for SANs
- Support complex, unpredictable, dynamically changing traffic patterns
- Provide fabric scalability for higher workload
- Differentiate Quality of Service on a per-VM basis
VM deployment, management, and security:
- Create flexible and isolated SAN sections; support management access control
- Support performance monitoring, trending, and capacity planning down to each VM
- Allow VM mobility without compromising fabric security
Transparent Virtual Machines with the MDS 9000 SAN Switching Infrastructure
Bandwidth, flexibility, performance, density, and security to support growing VM deployments:
- 8-Gbps Fibre Channel with investment protection
- VN-Link storage services for a VM-optimized SAN: per-VM unique HBA association (NPIV); per-VM Quality of Service; per-VM security, performance monitoring, and management; each VM can belong to a different VSAN (F-port trunking)
- Blade-server-optimized SAN: N-Port Virtualizer (NPV), FlexAttach, F-port PortChannel, F-port trunking
High-Performance MDS 9000 Family Switching Architecture
Centralized crossbar and arbiter architecture designed to provide the best performance in the most difficult traffic conditions:
- Virtual Output Queues (VOQs) on the external interfaces eliminate head-of-line blocking
- Even and predictable throughput and latency for many-to-one and many-to-few traffic conditions
- 100% wirespeed and fair load balancing for both large and small frames
QoS for Individual Virtual Machines
Zone-based QoS: VM-1 has high priority; VM-2 and any additional traffic have lower priority. Across a congested link (e.g., MDS 9124 to MDS 9222i multilayer fabric switches), VM-1 reports better performance than VM-2.
QoS is enforced in hardware based on the virtual port WWNs of each VM's virtual HBA (e.g., pwwn-v1 high priority, pwwn-v2 low priority) and the storage target pwwn-t, independent of the shared physical port pwwn-p.
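A minimal sketch of how zone-based QoS might be configured on the MDS (the zone name and pWWN values are hypothetical stand-ins for pwwn-v1 and pwwn-t; syntax can vary by release, and the feature requires the appropriate license). Traffic not covered by a high-priority zone gets lower priority:

    ! Hypothetical zone-based QoS on an MDS switch.
    qos enable
    zone name VM1-High vsan 10
      attribute qos priority high           ! frames between members get high priority
      member pwwn 21:00:00:0d:ec:00:00:01   ! hypothetical pwwn-v1 (VM-1 virtual HBA)
      member pwwn 50:06:01:60:00:00:00:01   ! hypothetical pwwn-t (storage target)
    ! Activating the zoneset for VSAN 10 then applies the policy.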
MDS 9000 Family Virtual SANs (VSANs)
- Hardware-based isolation of tagged traffic belonging to different VSANs
- The VSAN header is added at the Fx_Port ingress point based on port membership, carried across links between switches, and removed at the egress point
- Enhanced ISL (EISL) trunks carry tagged traffic from multiple VSANs
- Any switch interface in the fabric can be placed in any VSAN
- Independent instance of Fibre Channel services for each newly created VSAN: zone server, name server, management server, principal switch election, etc.; each service runs independently and is managed/configured independently
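For illustration, a sketch of creating the Blue and Red VSANs and assigning port membership (interface numbers are hypothetical); the EISL trunk then carries both tagged VSANs between switches:

    ! Hypothetical VSAN configuration on an MDS switch.
    vsan database
      vsan 10 name Blue
      vsan 20 name Red
      vsan 10 interface fc1/1          ! Fx_Port membership tags ingress traffic
      vsan 20 interface fc1/2
    interface fc2/1                    ! ISL to the neighbor switch
      switchport mode E
      switchport trunk mode on         ! EISL: carry tagged traffic
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20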
Fully Extending Fabric Virtualization to Virtual Machines
- NPIV allows each virtual machine (VM) to be associated with a unique virtual HBA
- VMs register independently via unique PWWNs and obtain unique FCIDs; standards-based (ANSI T11)
- A separate fabric login by each VM enables VM-level zoning, security, and traffic management
- Combined with F-port trunking, each VM (e.g., ERP, E-Mail, and Web VMs on one physical server) can belong to a different VSAN, with a single physical FC link carrying multiple VSANs
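The fabric-side enablement is small: NPIV is switched on at the MDS, and each VM's virtual HBA then performs its own fabric login, visible with its own PWWN and FCID. A sketch, assuming a recent NX-OS release:

    feature npiv            ! allow multiple logins per F_Port (older releases: npiv enable)
    show flogi database     ! each VM's virtual HBA appears with its own PWWN and FCID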
F-Port Trunking
- Extends VSAN tagging to the N_Port-to-F_Port connection, up to servers or storage devices
- Hardware-based isolation of tagged traffic belonging to different VSANs
- VSAN-trunking-enabled drivers are required on end nodes (for example, hosts): the VSAN header is added by the HBA driver, indicating virtual machine membership; traffic is tagged in the host depending on the VSAN-trunking support of the end node, and non-VSAN-trunking-capable end nodes send untagged traffic
- The Enhanced ISL (EISL) trunk carries tagged traffic from multiple VSANs between trunking E_Ports and trunking F_Ports; the VSAN header is removed at the egress point
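A sketch of enabling trunking on a host-facing MDS F_Port so a VSAN-trunking-capable HBA driver can tag per-VM traffic (interface and VSAN numbers are hypothetical):

    feature fport-channel-trunk        ! enables F-port trunking/channeling
    interface fc1/5                    ! host-facing F_Port
      switchport mode F
      switchport trunk mode on         ! port becomes a trunking F_Port (TF)
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20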
N-Port Virtualizer (NPV): Blade Switch Deployment Model
NPV simplifies deployment and management of large-scale blade server environments:
- Reduces the number of Domain IDs
- Minimizes interoperability issues with core SAN switches
- Minimizes coordination between server and SAN administrators
The blade switch is configured in NPV mode (i.e., HBA mode): NPV converts a blade switch operating as an FC switch into, effectively, an FC HBA toward the core SAN.
NPV is available on IBM and HP blade switches and on MDS 9124 & 9134 fabric switches.
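Turning a blade switch into NPV mode is a one-command, though disruptive, change; a sketch (on MDS fabric switches the command erases the configuration and reboots the switch), with NPIV needed on the core:

    ! On the blade/fabric switch (e.g., MDS 9124): disruptive.
    npv enable               ! switch reboots and comes up in NPV (HBA) mode
    interface fc1/13
      switchport mode NP     ! uplink to the core; proxies host fabric logins
    ! On the core switch:
    feature npiv             ! accept multiple logins on the F_Port from the NPV device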
Using Virtual Machines in Blade Servers with NPIV and the Cisco MDS Blade Switch Series
Each individual blade server can use NPIV to provide its virtual servers with virtual HBAs. With, for example, four blades presenting 3 virtual N-Ports each, the Cisco MDS blade switch carries 12 nested-NPIV virtual N-Ports through an MDS 9000 Family core switch to a disk array whose 12 LUNs may be mapped individually.
FlexAttach: Flexibility for Adds, Moves, and Changes
FlexAttach (based on WWN NAT):
- Each blade switch F-port is assigned a virtual WWN; the blade switch performs NAT operations on the real WWN of the attached server
Benefits:
- No SAN reconfiguration (no blade switch config change, no switch zoning change, no array configuration change) when a new blade server attaches to a blade switch port
- Provides flexibility for the server administrator by eliminating the need to coordinate change management with the networking team
- Reduces downtime when replacing failed blade servers
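FlexAttach is configured on the NPV-mode blade switch. As an assumption based on the NPV configuration model (the exact syntax differs by NX-OS release, so treat this as illustrative only), it might look like:

    ! Hypothetical FlexAttach configuration on an NPV-mode blade switch.
    flex-attach virtual-pwwn auto interface fc1/1   ! auto-assign a virtual pWWN to the F-port
    show flex-attach virtual-pwwn                   ! verify the real-to-virtual WWN mapping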
Enhanced Blade Switch Resiliency
F-port PortChannels: bundle multiple N_Port-to-F_Port links between the blade system and the core director into one logical link, using any port on any module
- High availability: a cable, port, or line card failure is transparent to the blade servers
- Traffic management: higher aggregate bandwidth and hardware-based load balancing
F-port trunking for the blade switch: partition the F-port to carry traffic for multiple VSANs (e.g., VSAN 1, 2, 3), extending VSAN benefits to blade servers
- Separate management domains and separate fault isolation domains
- Differentiated services: QoS, security
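A sketch of bundling two uplinks between an NPV blade switch and the core director into one F-port PortChannel (interface numbers are hypothetical; the same feature fport-channel-trunk prerequisite shown earlier applies):

    ! On the core director (F side):
    feature fport-channel-trunk
    interface port-channel 1
      switchport mode F
      channel mode active
    interface fc1/1 - 2
      channel-group 1 force    ! bundle both uplinks into the logical link
      no shutdown
    ! On the NPV blade switch, the matching NP uplinks join the same channel.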