Data Center Virtualization
Dr. Peter J. Welcher, Chesapeake Netcraftsmen
Cisco Mid-Atlantic Users Group: Columbia, MD 4/27/10; Washington, DC 4/29/10
Slides copyright 2009 Cisco, used with permission (and thanks). Slides added by Netcraftsmen are identified as "(CNC content)."

About the Speaker
- Dr. Pete Welcher: Cisco CCIE #1773, CCSI #94014, CCIP, CCDE written
- Specialties: network design, data center, QoS, MPLS, wireless, large-scale routing & switching, high availability
- Customers include large enterprises, federal agencies, hospitals, universities, and a cell phone provider
- Taught many of the Cisco router/switch courses and developed some, including revisions to the DESGN and ARCH courses
- Reviewer for many Cisco Press books and book proposals
- Presented lab sessions at Networkers: MPLS VPN Configuration in 2005, 2006, and 2007; BGP in 2008 and 2009; Data Center Design in 2009
- Over 27 blogs, 140 articles, and prior seminars posted
(CNC content)
Objectives
In this presentation I hope to:
- Look at why virtualization is needed and useful
- Look at various types of virtualization, with a data center focus
- Discuss some design examples to share ideas on how virtualization might help in your network
- Understand the benefits of VMware ESX and the Cisco Nexus 1000V (and their network impact)
Consequently:
- The topic coverage will be broad, not too deep
- WAY too many slides, too little time: some slides will be presented quickly
(CNC content)

Agenda
- Virtualization: Getting Motivated!
- Compute Resource Virtualization
- Network Virtualization
- Virtualization with VSS
- Virtualization with Nexus
- Adding Services
- Storage Virtualization
- Data Center Interconnect
- Conclusion
Virtualization: One Definition
Virtualization is the pooling and abstraction of resources and services in a way that masks the physical nature and boundaries of those resources and services from their users.

What's Virtualization?
- One as many: a single physical device acting as multiple virtual devices
  - E.g. contexts (ASA contexts, Nexus 7K VDCs, ...)
  - VMware and servers as VMs
  - VLANs and VRFs segmenting single links and/or routers
- Many as one: clustering / stacking, whereby multiple physical boxes become logically one virtual box
  - Examples: 6500 VSS, Nexus vPC
- Emulation
  - Example: pseudowires (EoMPLS, etc.)
(CNC content)
Why Virtualize? Servers
- One app, one box: (seriously underused hardware) x (many boxes)
- One app per blade continues that trend
- Death by (small) boxes (servers, network): device count drives up operations costs
- Underused boxes cost: procurement system costs, purchase price, vendor support, admin, space, power, cabling, operations support, ...
- However, separate boxes are sometimes used to reduce complexity
- Everything in one (or two) chassis means you have to be careful with those chassis
- Compromise: LOGICALLY separate boxes, i.e. virtualization
(CNC content)

Four Drivers Behind Virtualization
- Operational flexibility
But It's Not Just Servers!
- Clutter of many project-specific server load balancers: MS or Linux load balancing, various vendor appliances, now virtualized SLB appliances, Cisco CSMs, ...
- Firewalls proliferating: use firewall contexts instead
- Replication of environments (Dev, Test, Prod): similar, sometimes hand-me-down hardware; separate contexts can be used instead
(CNC content)

Other Significant Benefits
Virtualization addresses several key aspects:
- Ability to quickly spawn test and development environments
- Provides failover capabilities to applications that can't do it natively
- Maximizes utilization of resources (compute & I/O capacity)
- Server portability (migrate a server from one host to another)
Virtualization is not limited to servers and OS:
- Network
- Storage
- Application
- Desktop
Data Center Building Blocks
- Applications
- Application Networking Services: application delivery and application optimization
- Virtualization: network, server, storage, and management
- Transport Infrastructure: Ethernet, FC, DCE, WAN, MAN
- Compute Infrastructure: OS, hardware, firmware
- Storage Infrastructure: SAN, NAS, DAS

Virtualization Is Not Limited to OS/Server
- Network virtualization: segmentation and security; higher resource flexibility; improved capacity utilization
- Server virtualization: consolidation of physical servers; virtual machine mobility; rapid application deployment with VMs
- Storage virtualization: segmentation and security; improved data management & compliance; non-disruptive provisioning & migration
(Diagram: users reach a server pool and storage pool across an IP/MPLS network; the virtualization constructs at each layer include VPNs and VLANs, virtualized services (FW, LB, etc.), virtual I/O, virtual machines on a hypervisor on a physical server, and VSANs and virtual volumes on the storage fabric.)
Impact of VMware
- Right now, server virtualization is driving a lot of change
- VMware gives the ability to address server sizing and storage sizing issues without disruption
- VMotion gives the ability to take a physical chassis out of service without service disruption
  - Not to mention load-shifting, high availability for VMs, etc.
- Some costs are:
  - Data center infrastructure designs changing rapidly
  - Need to manage VM proliferation and the use of shared resources (CPU, RAM, SAN)
- Coming next:
  - Data center virtualization (clouds), modulo addressing cloud security considerations
  - Per-application infrastructure virtualization
(CNC content)

Some Observations
- Hidden lesson (to me): automation requires NOT hand-crafting solutions
- Needed: a system + network + SAN architecture (or a small set of architectures)
  - Think: an application or service required-components description (along with how they fit together)
- Stop doing one-offs: do a small number of variations of hardware environments supporting software environments
- Racking and cabling costs (and labor time) are getting too expensive
  - Avoid them via virtualization; use less cabling (10+ G links, FCoE)
(CNC content)
Agenda: Compute Resource Virtualization

Going from Here...: Evolution of Virtualization
- Today: one application per physical x86 server (Windows XP, Windows 2003, SUSE, Red Hat)
- Typical hardware utilization per box: 12%, 15%, 18%, 10% in the four examples
...to There
- App A, App B, App C, App D each in its own VM (Windows XP, Windows 2003, SUSE Linux, Red Hat Linux)
- All on one multi-core, multi-processor x86 host: 70% hardware utilization
- Guest OSes run over a host OS / virtual machine monitor

Native/Full Virtualization (Type-1)
- The VMM runs on bare metal
- The VMM virtualizes (emulates) hardware: the x86 ISA, for example
- Guest OS unmodified
- VMs (guest OS + applications) run under the control of the VMM
- Examples: VMware ESX Server, Microsoft Hyper-V, IBM z/VM, Linux KVM (Kernel VM)
What to Virtualize
Ideally all components:
- CPU: privileged instructions, sensitive instructions
- Memory
- I/O: network, block/disk, interrupt

A Closer Look at VMware's ESX
- Full virtualization; runs on bare metal; referred to as a Type-1 hypervisor
- ESX is the OS (and, of course, the VMM)
- ESX has Linux scripting / shell capabilities; ESXi does not (smaller, less "attack surface")
- ESX handles privileged executions from guest kernels
  - Emulates hardware when appropriate
  - Uses trap-and-emulate and binary translation
- Guest OSes run as if it were business as usual, except they really run in user mode (including their kernels)
What About Networking?
- Users naturally expect VMs to have access to the network
- VMs don't directly control networking hardware; a physical NIC is usually shared between multiple VMs
- When a VM communicates with the outside world, it passes the packet to its local device driver, which in turn hands it to the virtual I/O stack, which in turn passes it to the physical NIC
- ESX gives VMs several device driver options:
  - Strict emulation of Intel's e1000
  - Strict emulation of AMD's PCnet32 Lance
  - VMware vmxnet: paravirtualized!
- VMs have MAC addresses that appear on the wire

Virtual Adapters and Virtual Switches
- VM-to-VM and VM-to-native-host traffic is handled via a software switch that lives inside ESX
  - VM-to-VM: memory transfer
  - VM-to-native: physical adapter
- Note: speed and duplex are irrelevant with virtual adapters
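On the physical switch side, the vSwitch uplink is typically just an 802.1Q trunk carrying the VM VLANs. As a rough illustration (not from the slides; the interface and VLAN numbers are hypothetical), the access port facing an ESX host might look like this on a Catalyst switch:

    interface GigabitEthernet1/0/10
     description ESX host uplink (vmnic0) - hypothetical example
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 101-110
     ! Skip listening/learning delays; the host does not run STP
     spanning-tree portfast trunk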
What Does This Mean for the LAN Admin?
- To the LAN administrator, the picture is blurry
- The LAN role is typically limited to provisioning a trunk to ESX (like the one sketched above)
- No visibility into VM-to-VM traffic
- Troubleshooting performance or connectivity issues is challenging

Solution: Cisco's Virtual Switch (Nexus 1000V)
Problems:
- VMotion may move VMs across physical ports; policy must follow
- Impossible to view or apply policy to locally switched traffic
- Cannot correlate traffic on physical links from multiple VMs
Nexus 1000V:
- Extends the network to the VM
- Consistent services
- Coordinated, coherent management
Virtual Networking with Cisco's Nexus 1000V
- The Nexus 1000V moves the boundary of network visibility down to the individual VMs
- Policy can be configured per-VM
- Policy can move around within the ESX cluster
- A distributed virtual switch with the Cisco NX-OS command line interface

Nexus 1000V Key Features
- Includes key Cisco network & security features
- Addresses issues of VM isolation, separation of duties, and VM visibility
Separation of Duties: Network and Server Teams
Port profiles:
- A network feature macro: features are configured under a port profile once and can be inherited by access ports
- Familiar IOS look and feel for network teams configuring the virtual infrastructure

    port-profile vm180
      vmware port-group pg180
      switchport mode access
      switchport access vlan 180
      ip flow monitor ESE-flow input
      ip flow monitor ESE-flow output
      no shutdown
      state enabled

    interface Vethernet9
      inherit port-profile vm180

    interface Vethernet10
      inherit port-profile vm180

Separation of Duties: Network & Server Teams (continued)
- The Nexus 1000V automatically enables port groups in Virtual Center via API
- The server admin uses Virtual Center to assign vNIC policy from the available port groups
- The Nexus 1000V automatically enables VM connectivity at VM power-on
- The server team's workflow remains unchanged
Virtual Access Layer
(Diagram: a Nexus 1000V deployment with a VSM and VEMs on ESX/ESX4 hosts, uplinked via port channels (APC with src-mac hash) and trunking uplinks through a VSS access pair (Po151) to N7k1-VDC2 and N7k2-VDC2 over Po71/Po72.)

Lessons Learned: Data Center Moves
- If you don't have the 1000V or comparable information, replacing hardware & troubleshooting get interesting
- You need to know EXACTLY which blade NIC is cabled to which switch port
- The switch config is, in effect, your documentation
- Impossible to technically verify active/passive vNICs; the alternative is extensive server admin discussions and follow-up
- To upgrade such a switch: map old port to new port, replicate features, cross your fingers
- Your visibility and control is really per blade server, not per-VM
(CNC content)
Some Other Thoughts
- VMotion requires SAN in some form (iSCSI, NFS, FC, etc.)
  - It is claimed that well-designed iSCSI and NFS can give performance comparable to FC, except perhaps for high-end servers with high I/O rates
- Tiered SAN expected: less costly approaches where suitable; FC / high-performance arrays, etc. where needed
- Storage VMotion requires SAN
  - Provides flexible re-allocation of disk resources
  - Non-disruptive if done properly
(CNC content)

Agenda: Network Virtualization
What Is Network Virtualization?
- Overlay of physical topologies (N:1): N physical networks map onto 1 physical network
- Example: a security network, guest/partner network, backup network, and out-of-band management network combined onto one consolidated network

Network Virtualization Classification
Generally speaking, there are four areas in network virtualization:
- Control-plane virtualization
- Data-plane virtualization
- Management-plane virtualization
- Device pooling and clustering
Data Plane Virtualization
- Simple example: Virtual LANs
- 802.1Q: a 12-bit VLAN ID allows up to 4096 VLANs on the same physical cable (a VLAN trunk)

Another Data Plane Virtualization Example
- The VRF: Virtual Routing and Forwarding instance
- VRFs attach to logical or physical Layer 3 interfaces: VLAN trunks, physical interfaces, tunnels, etc.
(Diagram: one router carrying VRF 1, VRF 2, and VRF 3 on separate logical or physical Layer 3 interfaces.)
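As a minimal sketch (not from the slides; the VRF names, RDs, and addresses are hypothetical), defining two VRFs on a classic IOS router and binding them to 802.1Q subinterfaces of one physical link might look like this:

    ! Hypothetical VRF definitions
    ip vrf RED
     rd 65000:1
    ip vrf GREEN
     rd 65000:2
    !
    ! Bind subinterfaces of one trunk to different VRFs
    interface GigabitEthernet0/0.10
     encapsulation dot1Q 10
     ip vrf forwarding RED
     ip address 10.1.1.1 255.255.255.252
    !
    interface GigabitEthernet0/0.20
     encapsulation dot1Q 20
     ip vrf forwarding GREEN
     ! The same address in a different VRF: overlap is allowed
     ip address 10.1.1.1 255.255.255.252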
Control-Plane Virtualization for VLANs
- Example: the Spanning Tree Protocol, the loop-breaker in Ethernet topologies
- How is it virtualized? Per-VLAN spanning tree
- What's in it for me? Multiple logical topologies can exist on top of one physical topology, using the good old odd/even VLAN balancing scheme

Control-Plane Virtualization for VRFs
- Example: a per-VRF routing protocol; one VRF could run OSPF while another runs EIGRP
- Goal: isolation of routing and forwarding tables
- Allows overlapping IP addresses between VRFs
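Continuing the hypothetical sketch above, running OSPF in one VRF and EIGRP in another on classic IOS might look like this (process numbers and networks are illustrative only):

    ! OSPF instance scoped to VRF RED
    router ospf 1 vrf RED
     network 10.1.1.0 0.0.0.3 area 0
    !
    ! EIGRP for VRF GREEN via an address family
    router eigrp 100
     address-family ipv4 vrf GREEN
      autonomous-system 100
      network 10.1.1.0 0.0.0.3
      no auto-summary

Each VRF gets its own routing table and protocol instance, which is what makes the overlapping addressing in the previous sketch safe.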
Intersection of VLANs and VRFs
(Diagram: access-layer VLAN trunks carrying data VLANs 20/30, voice VLANs 120/130, and Red/Green/Blue VLANs 21-23 and 31-33 map into VRF Red, VRF Green, and VRF Blue at the Layer 3 distribution switches facing the intranet.)
- It is easy to map VLANs to VRFs at the distribution layer
- Provides a safe and easy way to isolate logical networks: no uncontrolled leaking from one to the other
- Maximizes use of the physical infrastructure

ASA / FWSM: Device Partitioning
- Example: Firewall Services Module virtual contexts
- Virtualization of the data, control, and management planes
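For flavor, a rough ASA-style multiple-context sketch (not from the slides; context names, interfaces, and file names are hypothetical, and on the FWSM the allocated interfaces are VLANs rather than physical ports):

    ! System execution space: enable multiple mode and carve out contexts
    mode multiple
    !
    context red
     allocate-interface GigabitEthernet0/0
     allocate-interface GigabitEthernet0/1
     config-url disk0:/red.cfg
    !
    context green
     allocate-interface GigabitEthernet0/2
     allocate-interface GigabitEthernet0/3
     config-url disk0:/green.cfg

An administrator then moves between contexts with "changeto context red", the command discussed on the next slide.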
FWSM Example: Device Partitioning
- A mix of control, data, and management plane virtualization techniques
- The changeto command switches from one context to another, similar to running multiple terminal sessions on a Linux system
- Not much in common with OS/server virtualization: no isolation between contexts, no VMM, a single OS image
  - Not even a one-to-one mapping between a process and a context
- Virtualization here is essentially a classification problem
  - The inbound interface and destination MAC address allow the data plane to assign traffic to the right context
  - Concept of a virtual interface throughout the packet processing chain
- CNC homework: how does the ASA differ?
(CNC-modified content)

Another Example: Nexus 7000
- The Nexus 7000 runs Cisco's NX-OS, a very different internal architecture compared to classic IOS
- NX-OS is a true multiprogramming OS: a Linux kernel plus user-space processes
- Most features (BGP, HSRP, EIGRP, etc.) are individual processes
- Direct benefit: fault isolation and process restartability
Nexus 7000's Virtual Device Contexts
- The OS and hardware architecture allow a robust virtualization implementation
- VDC concept: up to 4 individual partitions
- Concept of switchto/switchback and per-VDC access/isolation
- Somewhat like host-based virtualization

Virtual Device Contexts: Fault Domains
(Diagram: two VDCs, each with its own protocol stack and processes, sharing the NX-OS infrastructure and Linux 2.6 kernel on one physical switch.)
- A VDC builds a fault domain around all running processes within that VDC; faults within a running process or an entire VDC are isolated from other device contexts
- Example: if process DEF in VDC B crashes, the processes in VDC A are not affected and continue to run unimpeded
- This is a function of the process modularity of the OS and a VDC-specific IPC context
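A minimal NX-OS sketch of carving out a VDC (not from the slides; the VDC name and interface range are hypothetical, and some I/O modules constrain which ports can be allocated together):

    ! From the default VDC: create a second VDC and give it interfaces
    vdc Agg2 id 2
      allocate interface Ethernet2/1-8
    !
    ! Exec commands: jump into the new VDC (it boots with an
    ! empty config, like a fresh switch), then return
    switchto vdc Agg2
    switchback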
Virtual Device Contexts (VDCs)
- Each VDC runs its own instances of the Layer 2 protocols (VLAN manager, STP, IGMP snooping, LACP, UDLD, CDP, 802.1X, CTS) and Layer 3 protocols (OSPF, BGP, EIGRP, PIM, GLBP, HSRP, VRRP, SNMP), each with its own RIB and protocol stack (IPv4 / IPv6 / L2)
- Flexible separation/distribution of software components
- Flexible separation/distribution of hardware resources
- Securely delineated administrative contexts
- VDCs are NOT:
  - The ability to run different OS levels on the same box at the same time
  - Similar to host-based OS virtualization, where a single hypervisor handles all hardware resources

Device Pooling and/or Clustering
- Catalyst 6500's Virtual Switch System (VSS); Nexus 7000's Virtual Port Channel (vPC)
- It's really clustering plus clever packet classification
- A standard port channel on the downstream switches: two switches appear to be a single switch to the outside world
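A rough NX-OS vPC sketch (not from the slides; domain number, addresses, and port channels are hypothetical) showing how two Nexus switches present one port channel to a downstream device:

    feature vpc
    !
    vpc domain 10
      peer-keepalive destination 172.16.0.2 source 172.16.0.1
    !
    ! Peer link between the two Nexus chassis
    interface port-channel1
      switchport mode trunk
      vpc peer-link
    !
    ! The downstream switch configures a standard port channel;
    ! its two member links actually land on two different chassis
    interface port-channel71
      switchport mode trunk
      vpc 71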
Agenda: Virtualization with VSS

Current Network Challenges in the Data Center
- Traditional data center designs require ever-increasing Layer 2 adjacency between server nodes due to the prevalence of virtualization technology
- However, they are pushing the limits of Layer 2 networks, placing more burden on loop-detection protocols such as Spanning Tree
- L2/L3 core: FHRP (HSRP, VRRP), Spanning Tree, policy management
- L2 distribution: single active uplink per VLAN (PVST), L2 reconvergence, excessive BPDUs
- L2 access: dual-homed servers to a single switch, single active uplink per VLAN (PVST), L2 reconvergence
Catalyst 6500 Virtual Switching System 1440: Overview
(Diagram: today's dual-switch distribution pair vs. the VSS physical view and logical view; access switches, ToR switches, or blades and servers connect via 802.3ad or PAgP.)
- Simplifies operational manageability via a single point of management and the elimination of STP, FHRP, etc.
- Doubles bandwidth utilization with active-active Multi-Chassis EtherChannel (802.3ad/PAgP); reduces latency
- Minimizes traffic disruption from switch or uplink failure with deterministic subsecond stateful and graceful recovery (SSO/NSF)

Introduction to Virtual Switching System Concepts
Virtual Switching System Data Center
- A VSS-enabled data center allows for maximum scalability, so bandwidth can be added when required, while still providing a large Layer 2 hierarchical architecture free of reliance on Spanning Tree
- L2/L3 core: single router node, fast L2 convergence, scalable architecture
- L2 distribution: dual active uplinks, fast L2 convergence, minimized L2 control plane, scalable
- L2 access: dual-homed servers, single active uplink per VLAN (PVST), fast L2 convergence

Virtual Switching System Architecture: Multichassis EtherChannel (MEC)
- Prior to VSS, EtherChannels were restricted to reside within a single physical switch
- In a VSS environment, the two physical switches form a single logical network entity, so EtherChannels can now be extended across the two physical chassis
- Standalone: a regular EtherChannel on a single chassis; VSS: a Multichassis EtherChannel across the two VSS-enabled chassis
- Both the LACP and PAgP EtherChannel protocols, as well as manual ON mode, are supported
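As an illustration (not from the slides; interface and channel numbers are hypothetical), an MEC on a VSS pair is configured like an ordinary EtherChannel, except the member links sit on different physical chassis, visible in the switch/slot/port numbering:

    ! Member links on chassis 1 and chassis 2 of the VSS
    interface TenGigabitEthernet1/2/1
     channel-group 10 mode active    ! LACP; PAgP or mode on also supported
    interface TenGigabitEthernet2/2/1
     channel-group 10 mode active
    !
    interface Port-channel10
     switchport
     switchport mode trunk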
Overview: VMware ESX Virtual Networking
- Virtual machines, the service console, and the VMkernel attach virtual NICs (vNICs) to a Layer 2 virtual switch (vSwitch)
- The vSwitch uplinks via physical NICs (vmnics) to the physical switches

VM-Based NIC Teaming: Virtual Port-ID or Virtual MAC Based
- Advantages: switch redundancy
- Disadvantages:
  - Unequal traffic distribution is possible
  - VM bandwidth is limited to the mapped physical NIC's capacity
  - VMotion/IP storage is limited to one physical NIC's bandwidth
VM-Based NIC Teaming: IP Hash
- Advantages: better bandwidth availability for VM and service console/VMotion/IP storage traffic (802.3ad)
- Disadvantages: no switch redundancy

VM-Based NIC Teaming: Across VSS Catalyst Switches
- Maximum bandwidth for VM and service console/VMotion/IP storage traffic, with granular load balancing
- Increased availability with link aggregation across two separate physical Catalyst 6500s (802.3ad across the VSL)
- Simpler configuration on the Catalyst switch
- Maintains separation between VM traffic and service console/VMotion/IP storage traffic
- Allows scaling VM traffic and service console/VMotion/IP storage across all available NICs
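One design note worth showing in config form: classic ESX IP-hash teaming does not negotiate LACP or PAgP, so the switch side is an unconditional (mode on) EtherChannel, ideally hashed on source-destination IP to mirror the host's algorithm. A hypothetical sketch of the VSS side (interface numbers illustrative):

    ! Match the host's IP-hash with src-dst-ip channel load balancing
    port-channel load-balance src-dst-ip
    !
    interface range GigabitEthernet1/2/1 , GigabitEthernet2/2/1
     switchport mode trunk
     channel-group 5 mode on    ! static channel; no LACP/PAgP negotiation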
Agenda: Virtualization with Nexus

Building the Access Layer Using Virtualized Switching
Virtual access layer:
- Still a single logical tier of Layer 2 switching
- Common control plane with virtual hardware- and software-based I/O modules
Cisco Nexus 2000:
- Switching fabric extender module; acts as a virtual I/O module supervised by a Nexus 5000
Nexus 1000V:
- Software-based virtual distributed switch for server virtualization environments
(Diagram: data center core with Layer 3 links; aggregation with Layer 2 trunks; access and virtual access built from CBS 3100 blade switches, Nexus 2000 fabric extenders, and the Nexus 1000V reaching down to the VMs.)
Migration to a Unified Fabric at the Access: Supporting Data and Storage
- Nexus 5000 Series switches support the integration of both IP data and Fibre Channel over Ethernet at the network edge (a config sketch follows below)
- FCoE traffic may be broken out on native Fibre Channel interfaces from the Nexus 5000 to connect to the Storage Area Network (SAN)
- Servers require Converged Network Adapters (CNAs) to consolidate this communication over one interface, saving on cabling and power

Cisco Unified Computing System (UCS)
- A cohesive system including a virtualized Layer 2 access layer supporting unified fabric, with central management and provisioning
- Optimized for greater flexibility and ease of rapid server deployment in a server virtualization environment
- From a topology perspective, similar to the Nexus 5000 and 2000 series
- Components: UCS 6100 Series Fabric Interconnects, UCS 5100 enclosure, UCS B-Series servers, UCS 2100 fabric extenders, UCS I/O adapters; uplinks to dual SAN fabrics plus IP data aggregation
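Tying back to the Nexus 5000 unified-fabric slide: a rough sketch (not from the slides; VLAN, VSAN, and interface numbers are hypothetical) of enabling FCoE on a server-facing port with a CNA behind it:

    feature fcoe
    !
    ! Map an FCoE VLAN to a VSAN
    vlan 200
      fcoe vsan 2
    !
    ! Server-facing port carries both the data VLAN and the FCoE VLAN
    interface Ethernet1/5
      switchport mode trunk
      switchport trunk allowed vlan 1,200
    !
    ! Virtual Fibre Channel interface bound to that Ethernet port
    interface vfc5
      bind interface Ethernet1/5
      no shutdown
    !
    vsan database
      vsan 2 interface vfc5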
Virtual Device Context Example: Multiple Aggregation Blocks
- A single physical pair of aggregation switches used with multiple VDCs
- Access switches dual-homed into one of the aggregation VDC pairs
- Aggregation blocks only communicate through the core layer
Design considerations:
- Ensure the control plane requirements of multiple VDCs do not overload the Supervisor or I/O modules
- Where possible, consider dedicating complete I/O modules to one VDC (CoPP is in hardware per module)
- Ports or port groups may be moved between aggregation blocks (DC pods) without requiring re-cabling

Agenda: Adding Services
Virtual Device Context Example: Services VDC Sandwich
- Multiple VDCs used to sandwich services between switching layers
- Allows services to remain transparent (Layer 2), with routing provided by the VDCs
- May be leveraged to support both services chassis and appliances
Design considerations:
- Access switches requiring services are connected to the sub-aggregation VDC
- Access switches not requiring services may be connected to the aggregation VDC
- Allows firewall implementations not to share interfaces for ingress and egress
- Facilitates virtualized services by using multiple VRF instances in the sub-aggregation VDC

Using Virtualization and Service Insertion to Build Logical Topologies
- Logical topology example using the services VDC sandwich physical model
- Layer-2-only services chassis with transparent service contexts
- The VLANs above, below, and between service modules are a single IP subnet
- The sub-aggregation VDC is a Layer 3 hop running HSRP, providing the default gateway for the server farm subnets
- Multiple server farm VLANs can be served by a single set of VLANs through the services modules
- Traffic between server VLANs does not need to transit the services devices, but may be directed through services using virtualization
(Diagram: a client-server flow from the enterprise network through the aggregation VDC, transparent FWSM and ACE contexts (VLANs 161, 162, 163, 170, 171, 172), the sub-aggregation VDC, and the access layer to a web server farm on VLAN 180.)
Using Virtualization and Service Insertion to Build Logical Topologies (continued)
- Logical topology to support multi-tier application traffic flow
- Same physical VDC services chassis sandwich model
- Addition of multiple virtual contexts to the transparent services modules
- Addition of VRF routing instances within the sub-aggregation VDC
- Service module contexts and VRFs are linked together by VLANs to form logical traffic paths
- Example: a web/app server farm and a database server cluster homed to separate VRFs to direct traffic through the services
(Diagram: the web/app server farm on VLAN 180 sits behind one chain of transparent FWSM/ACE contexts and a VRF (VLANs 161-163); the DB server cluster on VLAN 181 sits behind a second chain (VLANs 151-153); FT VLANs interconnect the redundant modules.)

Service Pattern, Active-Active: Client-to-Server
(Diagram: redundant ASA and ACE contexts sandwiched between N7k1-VDC2 and N7k2-VDC2 (vrf1/vrf2, Po99), with HSRP on SVIs 161 and 151 (.1 and .7), IPS appliances, and the server farm on VLANs 163-164.)
Agenda: Storage Virtualization

Storage Virtualization: Terminology?
- Storage virtualization encompasses various concepts; definitions may vary based on your interlocutor
- For some, storage virtualization starts at virtual volumes; for others, it starts with Virtual SANs
- Example: is unified I/O storage virtualization, network virtualization, or both?
- First things first, the basics: VSANs, FlexAttach, NPIV, NPV, unified I/O, virtual volumes
Just Like There Are VLANs, There Are VSANs
- SAN islands: duplication of hardware resources
- VSAN: consolidation of SANs on one physical infrastructure, with just-in-time provisioning
- Much like VLANs, VSAN traffic carries a tag
(Diagram: Departments A, B, and C move from separate SAN islands to Virtual SANs on shared switches.)

VSAN Tagging: Two Primary Functions
- Hardware-based isolation of tagged traffic belonging to different VSANs
  - No special drivers or configuration required for end nodes (hosts, disks, etc.); they have no clue about VSANs
  - Traffic is tagged at Fx_Port ingress and carried across Enhanced ISL (EISL) trunks between switches (TE_Ports); the VSAN header is added at the ingress point, indicating membership, and removed at the egress point
- An independent instance of the Fibre Channel services is created for each newly created VSAN
  - Services include: zone server, name server, management server, principal switch election, etc.
  - Each service runs independently and is managed/configured independently
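A minimal MDS-style sketch (not from the slides; VSAN numbers, names, and interfaces are hypothetical) of creating VSANs, assigning ports, and trunking them over an EISL:

    vsan database
      vsan 10 name DEPT-A
      vsan 20 name DEPT-B
      vsan 10 interface fc1/1
      vsan 20 interface fc1/2
    !
    ! Inter-switch link carrying both VSANs as tagged traffic (EISL)
    interface fc1/16
      switchport mode E
      switchport trunk mode on
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20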
WWN Virtualization: FlexAttach
- HBAs have World Wide Names, burnt in like MAC addresses
- FlexAttach assigns a WWN to a port: each F-Port is assigned a virtual WWN, and the burnt-in WWN is NATed to the virtual WWN
- Benefits:
  - The same WWN stays on a given port, regardless of the attached host
  - Control over WWN assignment
  - Replacing a failed HBA, host, or blade is simple: no blade switch config change, no switch zoning change, no array configuration change

SAN Device Virtualization
- Allows provisioning with virtualized servers and storage devices
- Significantly reduces the time to replace HBAs and storage devices
  - No reconfiguration of zoning, VSANs, etc. required on the MDS
  - No need to reconfigure storage array LUN masking after replacing HBAs
  - Eliminates rebuilding driver files on AIX and HP-UX after replacing storage
- Physical-to-virtual mapping presents virtual WWNs to the servers and storage arrays (virtual initiator, virtual target)
VM-Unaware Storage
- Traditional scenario: 3 VMs on ESX, one physical HBA (a regular N-to-F port login to the FC switch)
- The VMs don't have WWNs; only the physical HBA does
- No VM-awareness inside the SAN fabric: no VM-based LUN masking, for instance

VM-Aware Storage: NPIV
- NPIV stands for N_Port ID Virtualization
- With an NPIV-aware HBA, each VM gets its own port WWN (pWWN1, pWWN2, pWWN3)
- The fabric sees those WWNs, so VM-aware zones or LUN masking become possible
Domain ID Explosion
- Blade servers cause domain ID explosion: each FC switch inside a blade server enclosure uses its own domain ID
- The theoretical maximum number of domain IDs is 239 per VSAN
- The supported number of domains is quite a bit smaller: EMC: 40 domains; HP: 40 domains; Cisco tested: 75
- Manageability: lots of switches to manage, possible domain ID overlap, possible FSPF reconfiguration

Solution: N-Port Virtualizer (NPV)
What is NPV?
- NPV enables the switch to act as a proxy for its connected hosts
- A switch in NPV mode is no longer a switch: it does not use a domain ID of its own, it inherits one from the upstream fabric switch
- No longer limited by domain ID boundaries
Manageability:
- Far fewer switches to manage; NPV is very much plug and play
- An NPV-enabled switch is now managed like an NPIV-enabled host
- Eliminates the need for server administrators to manage the SAN
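A rough NX-OS sketch of the two sides (not from the slides; the interface is hypothetical). Note that enabling NPV mode erases the configuration and reboots the switch:

    ! On the edge (e.g., blade) switch: NPV mode, uplink as NP port
    feature npv
    interface fc1/1
      switchport mode NP
      no shutdown

    ! On the upstream core switch: NPIV, so one F-port
    ! can accept multiple fabric logins
    feature npiv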
N-Port Virtualization (NPV): An Overview
- NPV topology: the switch inside the blade server runs NPV
- Reduces domain IDs; blade server switches become simpler to configure
- NPV-aware switches inherit their domain ID from the core switch: no name server, no zones, no FSPF, etc.
- All FCIDs come from the core switch's domain (e.g., domain ID 0A yields FCIDs of the form 0A.x.x)

Differences Between NPIV and NPV
NPIV (N-Port ID Virtualization):
- Functionality geared toward servers' host bus adapters (HBAs)
- NPIV provides a means to assign multiple server logins to a single physical interface
- The use of different virtual pWWNs allows access control (zoning) and port security to be implemented at the application level
- Usage applies to applications such as VMware, MS Virtual Server, and Linux Xen
NPV (N-Port Virtualizer):
- Functionality geared toward fabric switches (MDS 9124, MDS 9134, Nexus 5000, and blade switches)
- NPV makes the FC switch's uplink connections act as server connections instead of standard ISLs
- Utilizes NPIV-type functionality to allow multiple server logins from other switch ports to use the NP-port uplink
Unified I/O?
- Consolidation of FC and Ethernet traffic on the same infrastructure
- New protocols (FCoE, Data Center Ethernet) provide guaranteed QoS levels per traffic class
- Instead of separate FC HBAs to the SAN and NICs to the LAN, a Converged Network Adapter (CNA) carries both the SAN (FCoE) and the LAN (Ethernet) over one link

Un-Unified I/O Today
- Parallel LAN/SAN infrastructure (LAN, SAN A, SAN B): inefficient use of network infrastructure
- 5+ connections per server: higher adapter and cabling costs
- Adds downstream port costs, both cap-ex and op-ex
- Each connection adds additional points of failure in the fabric
- Longer lead time for server provisioning
- Multiple fault domains: complex diagnostics
- Management complexity: firmware, driver patching, versioning
Unified I/O Today: Phase 1
- Reduction of server adapters (an FCoE switch at the access feeds the LAN and both SAN fabrics)
- Simplification of the access layer and cabling
- Gateway-free implementation: fits into the installed base of existing LANs and SANs
- L2 multipathing between access and distribution
- Lower TCO: fewer cables, investment protection (LANs and SANs), consistent operational model

Storage Virtualization Logical Topology
- Front-end VSAN: virtual targets presented to the hosts
- Back-end VSAN: virtual initiators addressing the pooled physical resources
Network-Based Volume Management
- Simplify volume presentation and management
  - Create, delete, and change storage volumes
  - Provides front-end LUN masking and mapping of storage volumes to hosts
- Centralize management and control
  - A single Invista console manages virtual volumes, clones, and mobility jobs
- Reduce the management complexity of heterogeneous storage
  - A single management interface allocates and reallocates storage resources

Dynamic Volume Mobility Explained
- Hosts see the storage virtualization layer as an array: it presents virtual volumes to the hosts and maps virtual volumes to physical volumes (e.g., virtual LUN 10 mapped via virtual initiators and data path controllers to LUN 20 on array 1 (EMC) or LUN 30 on array 2 (HDS))
- To move a volume: select the source and target volumes; the network synchronizes the volumes, then changes the virtual-to-physical mapping
- No I/O disruption to the host
Heterogeneous Point-in-Time Copies
- Create point-in-time copies: the source and clone can be on different, heterogeneous storage arrays
- Enable replication across heterogeneous storage
  - Leverage existing storage investments
  - Reduce replication storage capacity and management costs
- Maximize replication benefits to support service levels
  - Backup and recovery
  - Testing, development, and training
  - Parallel processing, reporting, and queries

Agenda: Data Center Interconnect
Problem Statement: LAN Extensions
- Certain applications require L2 connectivity among peers, within and between data centers
  - Clusters (Veritas, MSFT), VMotion, home-brewed apps
- Uses: server migrations, disaster recovery and resiliency, distributed active-active data centers
- High-rate encryption may require an L2 transport between sites

Traditional Layer 2 Data Center Interconnect
- EoMPLS
- Dark fiber
- VPLS
Traditional Layer 2 DCI: Data Plane MAC Learning
- Layer 2 VPN technologies use a data-plane-driven learning mechanism, the same one used by classical Ethernet bridges
- When a frame is received with an unknown source MAC address, that source MAC address is programmed into the bridge table
- When a bridge receives a frame whose destination MAC is not in the MAC table, the frame is flooded on the bridge domain; this is referred to as unknown unicast flooding
- As the flood travels throughout the entire bridge domain, it triggers learning of its source MAC address over multiple hops
- This flooding behavior causes failures to propagate to every site in the L2 VPN

Traditional Layer 2 DCI: Circuit Switching
- Before any learning can happen, a full mesh of circuits must be available; circuits are usually statically predefined
- For N sites, there will be N*(N-1)/2 circuits, an operational challenge: for example, 4 sites need 6 circuits, while 10 sites need 45
- Scalability is impacted as the number of sites increases
- Head-end replication for multicast and broadcast
- Complex addition and removal of sites
Traditional Layer 2 DCI: Loop Prevention
- Coordination between edge devices on the same site is needed: one of the edge devices becomes the designated active device
- The designated active device can be chosen at the device level or per VLAN
- STP is often extended across the sites of the Layer 2 VPN
- Very difficult to manage as the number of sites grows; malfunctions on one site will likely impact all sites on the VPN

Overlay Transport Virtualization at a Glance
- Ethernet traffic between sites is encapsulated in IP: MAC in IP
- Dynamic encapsulation based on a MAC routing table; no pseudowire or tunnel state is maintained
- Example: server 1 (MAC 1, site 1) talks to server 2 (MAC 2, site 2); the OTV edge device's MAC table shows MAC 1 on Eth1 and MAC 2 reachable via IP B, so it encapsulates the frame from IP A to IP B, and the remote OTV edge device decapsulates it
OTV: MAC Tables
- OTV uses a protocol to proactively advertise MAC reachability (control-plane learning)
- We will refer to this protocol as the overlay Routing Protocol (oRP)
- oRP runs in the background once OTV has been configured; no configuration is required by the user for oRP to operate
(Diagram: West (IP A), East (IP B), and South (IP C) sites joined across the core by oRP.)

Overlay Transport Virtualization Benefits
- STP BPDUs are not forwarded on the overlay network ("the BPDUs stop here"); the OTV device participates in STP on the campus side
- Unknown unicasts are not forwarded on the overlay, on the assumption that no hosts are silent or unidirectional (workarounds exist if not)
- Proxy ARP keeps ARP traffic local, reducing overlay broadcast traffic
- OTV prevents loops from forming via control of device forwarding for a site (a site VLAN lets the site's OTV edge devices communicate)
(CNC-summarized content)
Improving Traditional Layer 2 VPNs
- Data plane learning -> control plane learning: move to a control plane protocol that proactively advertises MAC addresses and their reachability instead of the current flooding mechanism
- Circuit switching -> packet switching: no static tunnel or pseudowire configuration required; a packet-switched approach allows replication of traffic closer to the destination, which translates into much more efficient bandwidth utilization in the core
- Loop prevention -> automatic multi-homing: ideally a multi-homed solution should allow load balancing of flows within a single VLAN across the active devices in the same site, while preserving the independence of the sites; STP stays confined within the site (each site with its own STP root bridge)

Overlay Transport Virtualization: Tech Pillars
OTV is a MAC-in-IP technique for supporting Layer 2 VPNs over any transport.
- Packet switching: no pseudowire state maintenance, optimal multicast replication, multipoint connectivity, point-to-cloud model
- Protocol learning: built-in loop prevention, preserved failure boundaries
- Seamless site addition/removal; automated multi-homing
OTV: Egress Routing Localization
- HSRP hellos can be filtered at the OTV site edge, so a single FHRP group will have an active gateway on each site; no special configuration of the FHRP is required
- ARP requests for a host are intercepted at the OTV edge to ensure the replies come from the local active gateway
- Result: optimal egress router choice; ARP and FHRP hello traffic is kept local to each site

OTV Configuration
The CLI is subject to change prior to FCS.

    interface Overlay0
      description otv-demo
      otv external-interface Ethernet1/1    ! connects to the core; joins the core
                                            ! mcast groups; its IP is the OTV encap source
      otv group-address ...                 ! ASM/Bidir group in the core used for oRP
      otv data-group-range .../32           ! SSM range carrying the site's mcast data
      otv advertise-vlan ...                ! site VLANs being extended by OTV
      otv site-vlan 100                     ! VLAN used within the site between edge devices
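As the slide warns, this pre-release CLI did change. For comparison only, a rough equivalent in the OTV syntax that later shipped in NX-OS (group addresses and VLAN ranges here are hypothetical):

    feature otv
    !
    interface Overlay0
      otv join-interface Ethernet1/1
      otv control-group 239.1.1.1     ! ASM group for the control plane
      otv data-group 232.1.1.0/28     ! SSM range for extended multicast
      otv extend-vlan 100-150
      no shutdown
    !
    otv site-vlan 100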
Agenda: Conclusion

The Future?
- Bigger, faster ESXi servers
  - IBM has announced Power7 chips (8 cores) and servers claimed to support up to 640 VMs (32 processors x 20 VMs each); Intel Nehalem-EX may be roughly comparable
  - Intel has put out experimental 48-core chips (lower clock rate)
  - Faster CPUs + more cores reduce the CPU limitation on the number of VMs
  - Cisco (and now others) technology reduces the memory issues capping the number of VMs
  - Do the math: 128 processors x perhaps 30 VMs each = 3840 VMs in a rack; at 50 VMs each, 6400 VMs in a rack??
- Fewer, faster network connections
  - Do you use N x 10 G, or 40 G, or 100 G to such a box? Especially with FCoE thrown in
  - Greatly reduces the cable tangle of 6-10 NIC + HBA adapters
  - Further shrinks the size of the chassis
(CNC content)
The Future, Part 2
- More SAN
  - VMotion and other desirable techniques require SAN; your business depends on it, so speed and reliability are key
  - Consistent SAN management practices and SAN virtualization enhance flexibility and reliability
  - SAN de-duplication, SAN-based backup, etc. are the icing on the cake
- Cloud computing
  - Some mix: low-risk servers may well end up in the cloud
  - Crucial servers, big DBs, and high-risk servers remain internal?
(CNC content)

Virtualization: What's in It for Me?
- Virtualization is an overloaded term: a collection of technologies that allow more flexible usage of hardware resources
- Assembled in an end-to-end architecture, these technologies provide the agility to respond to business requirements
(CNC content)
Summary
- Virtualization of the network infrastructure improves utilization and offers new deployment models in the data center
- Flexible service models readily account for application requirements
- Security is a process, not a product; virtualization allows for efficient application of security policies
- The application is the beneficiary of all these developments
(CNC content)

Any Questions?
- For a copy of the presentation, email me at [email protected]; I'll post a link in a blog article
About Chesapeake Netcraftsmen:
- Cisco Premier Partner with the Cisco Customer Satisfaction Excellence rating
- We wrote the original version of the Express Foundations courses required for VAR Premier Partner status (and took and passed the tests), and the recent major CCDA/CCDP refresh
- Cisco Advanced Specializations: Advanced Unified Communications (and IP Telephony), Advanced Wireless, Advanced Security, Advanced Routing & Switching, Advanced Data Center Networking Infrastructure
- Deep expertise in routing and switching (several R&S and four double CCIEs)
- We do network / security / network management / unified communications / data center design and assessment
(CNC content)
Extra Slides: Virtualization
Servers: One App, One Server
- Focus on reducing footprint
  - Rack form factors (6-20 servers per cabinet)
  - Blade form factors (30-60 servers per cabinet)
- This helped alleviate some of the footprint issues, but power and heat are still a problem
- The more powerful the CPU, the lower the server utilization! Average server utilization ranges between 4-10%
- Still one application per server

Servers: Virtualization Is the Key
- Apply mainframe virtualization concepts to x86 servers
- Use virtualization software to partition an Intel / AMD server to run several operating system and application instances
- Deploy several virtual machines (database, web and application servers, file, print, DNS, LDAP) on one server using virtualization software
Virtualization Landscape
- Consolidation (logical-to-physical):
  - Services: unified communication, content, WAN acceleration
  - Network: VLAN, MPLS VPN, virtual switch, VRF
  - Compute: server virtualization, virtual appliances, virtual contexts
  - Storage: VSAN, vHBA, NPIV
- Scaling (physical-to-logical):
  - Services: call control, web server, video server, file server
  - Network: HSRP/GLBP/VRRP, VSS, WCCP
  - Compute: SLB VIP, unified computing, cloud computing
  - Storage: service profiles, logical volumes

Several Ways to Virtualize
- Container-based: Linux VServer
- Paravirtualization: Xen, VMware ESX (device drivers), Microsoft Hyper-V
- Host-based: Microsoft Virtual Server, VMware Server and Workstation
- Native ("full") virtualization: VMware ESX, Linux KVM, Microsoft Hyper-V, Xen
Extra Slides: Network Virtualization

What Is Network Virtualization? (Overlay View)
- Overlay of logical topologies (1:N): one physical network supports N virtual networks
- Example virtual topologies on one physical topology: an outsourced IT department, a quality assurance network, and a sandboxed department (regulatory compliance)
Nexus 7000 Series Virtual Device Contexts (VDCs)
Virtualization of the Nexus 7000 Series chassis:
- Up to 4 separate virtual switches from a single physical chassis with common supervisor module(s)
- Separate control plane instances and management/CLI for each virtual switch
- Interfaces belong to only one of the active VDCs in the chassis; external connectivity is required to pass traffic between VDCs of the same switch
Designing with VDCs:
- VDCs serve a role in the topology similar to a physical switch: core, aggregation, or access
- Multiple VDC example topologies have been validated within Cisco by ESE and other teams
- Two VDCs from the same physical switch should not be used to build a redundant network layer; physical redundancy is more robust

Virtualization Inside a VDC
- Each VDC contains its own VLANs and VRFs
- Scalability: 4K VLANs per VDC, 256 VRFs per VDC, 4 VDCs
Extra Slides: Data Center Service Insertion

Data Center Service Insertion: Direct Services Appliances
- Appliances directly connected to the aggregation switches
- The service device type and routed or transparent mode can affect physical cabling and traffic flows
- Transparent-mode ASA example:
  - Each ASA is dependent on one aggregation switch
  - Separate links for fault-tolerance and state traffic either run through aggregation or directly
  - Dual-homing with the interface redundancy feature is an option
  - Currently no EtherChannel support on the ASA
Data Center Service Insertion: External Services Chassis
- Dual-homed Catalyst 6500: services do not depend on a single aggregation switch
- Direct link between the chassis for fault-tolerance traffic; these VLANs may alternatively be trunked through aggregation
- A dedicated integration point for multiple data center service devices
- Provides slot real estate for 6500 services modules:
  - Firewall Services Module (FWSM)
  - Application Control Engine (ACE) module
  - Other services modules; also beneficial for appliances

ISV Network View
(Figure-only slide.)
Physical Solution Topology
(Diagram: an aggregation layer of Catalyst 6500s and Nexus 7000s; a services layer with ACE modules, ASA 5580s, WAAS, ACE WAF, and IDS/IPS; an access layer of Catalyst 6500s, Catalyst 4900s, VSS, Nexus 5000s, and Catalyst 3100 VBS.)

Data Center Design: Aggregation Layer DMZ
- Redundant physical chassis provide the virtual platform
- Physical interfaces are allocated to independent VDCs and ASA virtual contexts
- Fault-tolerance and state VLANs leverage VDC2
(Diagram: two Nexus 7000s joined by Po99, with the ASAs attached via data VLANs 161/162, failover VLAN 171, and state VLAN 172.)
Active-Active Solution: Virtual Components
- Nexus 7000: VDCs, VRFs, SVIs (VDC max = 4)
- ASA 5580: virtual contexts (ASA max = 50 contexts; FWSM max = 250)
- ACE service module: virtual contexts and virtual IPs (VIPs) (ACE max = 250 contexts; ACE 4710 = 20 contexts)
- IPS 4270: virtual sensors (max = 4)
- Virtual access layer: Virtual Switching System, Nexus 1000V, Virtual Blade Switching

Service Pattern: Server-to-Server, Intra-VRF
(Diagram: the servers' default gateway is HSRP .1 on N7k1-VDC2 (STP root), with vrf1 spanning N7k1-VDC2 and N7k2-VDC2 over Po99; flows 1-3 run between servers Srv-A through Srv-D without leaving the VRF.)
Service Pattern: Server-to-Server, Inter-VRF
(Diagram: traffic between Srv-A/Srv-B in vrf1 and Srv-C/Srv-D in vrf2 is steered up through the ASA contexts and ACE/services chassis chain between the N7k VDC1 and VDC2 layers, using OSPF NSSA area 81.)

Active-Active Solution: Logical Topology
(Diagram: Catalyst 6500s in OSPF area 0 connect to N7k1-VDC1 and N7k2-VDC1; a Layer 2 service domain of ASA and ACE/services chassis contexts sits between VDC1 and the vrf1/vrf2 instances in N7k1-VDC2 and N7k2-VDC2, all within OSPF NSSA area 81; router IDs and /24 subnets interconnect the layers.)
Service Flow: Client-to-Server, Example 2 (WAAS Farm)
(Diagram: traffic enters via ASA1 on VLAN 161 to the HSRP .1 SVI; an ACE class map ("class-map match-all ANY_TCP" / "match virtual-address ... tcp any") redirects it to the WAAS farm; it then passes the WAF cluster and the IPS virtual sensors (vs0) on VLANs 163/164 and reaches the server VLANs through bridged VLANs 162/163 on N7k1-VDC2 (HSRP .7, vrf1/vrf2); interface VLAN 162 carries the AGGREGATE_SLB_POLICY input service policy and its VIP.)

Service Flow: Client-to-Server, Example 1
(Diagram: the same sandwich, with interface VLAN 190 carrying the L4_LB_VIP_HTTP_POLICY input service policy and VIP for load balancing to the WAF devices, which forward through the bridged VLANs to the servers behind interface VLAN 162 and its AGGREGATE_SLB_POLICY.)
Service Pattern: Intra-VRF with Services
(Diagram: the servers' default gateway is HSRP .1 in vrf1 on N7k1-VDC2 (STP root), paired with N7k2-VDC2 over Po99; an ASA virtual context (ASA1-vc3 / ASA2-vc3), allocated on the same pair of physical ASAs, sits on VLANs 141/142 in the path to an Oracle DB server on bond142.)

Extra Slides: SAN Virtualization
Nested NPIV
- When the NP port comes up on an NPV edge switch, it first does FLOGI and PLOGI into the core to register in the FC name server
- End devices connected to the NPV edge switch do FLOGI, but the NPV switch converts the FLOGI to an FDISC command, creating a virtual pWWN for the end device and allowing it to log in over the physical NP port
- NPIV-capable devices connected to the NPV switch continue the FDISC login process for all of their virtual pWWNs, which go through the same NP port as the physical end device

SAN-Based Storage Virtualization
- Leverages next-generation intelligent SAN switches; supports multi-vendor arrays and heterogeneous environments
- Performance architecture: a split-path architecture for high performance
  - A stateless virtualization architecture does not store any information written by the application
  - High-speed, high-throughput data mapping: purpose-built ASICs (DPP) handle and redirect I/O at line speed with almost no additional latency, based on instructions provided by the meta-data appliances
- A scalable architecture providing advanced functionality (virtual volumes)
Extra Slides: Data Center Interconnect and OTV

DC Interconnect LAN Extension: VSS over Dark Fiber, Multiple DCs
- Assumes dark fiber between sites; distance limitations are set by DWDM
- The number of sites can be 2 or more
- Add 2 switches in the main data centers; the switches use a separate lambda to interconnect
- These switches form a VSS; the VSL is 10 Gbps
- X2 availability: 12.2(33)SXI
OTV and Unicast
OTV data plane, intra-site unicast traffic:
- A Layer 2 lookup in the OTV MAC table finds the destination MAC on a local interface (e.g., in VLAN 100, MAC 1 on Eth 1 and MAC 2 on Eth 2), and the frame is delivered within the site as usual

OTV data plane, inter-site unicast traffic:
1. A Layer 2 lookup at the West edge device finds that MAC 3 is reachable via IP B (the MAC table contains MAC addresses reachable through IP addresses)
2. The edge device encapsulates the MAC 1 -> MAC 3 frame in an IP packet from IP A to IP B
3. The packet crosses the core to the East edge device
4. The East edge device decapsulates the frame
5. A Layer 2 lookup at East finds MAC 3 on a local port (Eth 3) and delivers the frame
- No pseudowire state is maintained; the encapsulation is done based on a destination lookup rather than a circuit lookup
OTV Scalability Targets for the (First) Release
(Multi-dimensional / uni-dimensional scale targets:)
- Overlays: 3 / 64
- Number of sites: 3 / 10
- VLANs per overlay: (value missing)
- MACs across all sites: 25K / 32K
- MACs on each site: 8K / 8K
- Multicast data groups: (value missing)
DISCLAIMER: These are targets and are still subject to additional testing.
Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center
Expert Reference Series of White Papers Planning for the Redeployment of Technical Personnel in the Modern Data Center [email protected] www.globalknowledge.net Planning for the Redeployment of
Next Generation Data Center Networking.
Next Generation Data Center Networking. Intelligent Information Network. עמי בן-עמרם, יועץ להנדסת מערכות [email protected] Cisco Israel. 1 Transparency in the Eye of the Beholder With virtualization, s
Implementing and Troubleshooting the Cisco Cloud Infrastructure **Part of CCNP Cloud Certification Track**
Course: Duration: Price: $ 4,295.00 Learning Credits: 43 Certification: Implementing and Troubleshooting the Cisco Cloud Infrastructure Implementing and Troubleshooting the Cisco Cloud Infrastructure**Part
Data Center Virtualization
Data Center Virtualization René Raeber CE Datacenter Central Consulting Advanced Technologies/DC Setting the stage: What s the meaning of virtual? If you can see it and it is there It s real If you can
How To Set Up A Virtual Network On Vsphere 5.0.5.2 (Vsphere) On A 2Nd Generation Vmkernel (Vklan) On An Ipv5 Vklan (Vmklan)
Best Practices for Virtual Networking Karim Elatov Technical Support Engineer, GSS 2009 VMware Inc. All rights reserved Agenda Best Practices for Virtual Networking Virtual Network Overview vswitch Configurations
OVERLAYING VIRTUALIZED LAYER 2 NETWORKS OVER LAYER 3 NETWORKS
OVERLAYING VIRTUALIZED LAYER 2 NETWORKS OVER LAYER 3 NETWORKS Matt Eclavea ([email protected]) Senior Solutions Architect, Brocade Communications Inc. Jim Allen ([email protected]) Senior Architect, Limelight
CCNA DATA CENTER BOOT CAMP: DCICN + DCICT
CCNA DATA CENTER BOOT CAMP: DCICN + DCICT COURSE OVERVIEW: In this accelerated course you will be introduced to the three primary technologies that are used in the Cisco data center. You will become familiar
M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2.
M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. What are the different types of virtualization? Explain
Next Gen Data Center. KwaiSeng Consulting Systems Engineer [email protected]
Next Gen Data Center KwaiSeng Consulting Systems Engineer [email protected] Taiwan Update Feb 08, kslai 2006 Cisco 2006 Systems, Cisco Inc. Systems, All rights Inc. reserved. All rights reserved. 1 Agenda
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center
Migrate from Cisco Catalyst 6500 Series Switches to Cisco Nexus 9000 Series Switches
Migration Guide Migrate from Cisco Catalyst 6500 Series Switches to Cisco Nexus 9000 Series Switches Migration Guide November 2013 2013 Cisco and/or its affiliates. All rights reserved. This document is
Network Virtualization
Network Virtualization Petr Grygárek 1 Network Virtualization Implementation of separate logical network environments (Virtual Networks, VNs) for multiple groups on shared physical infrastructure Total
The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer
The Future of Computing Cisco Unified Computing System Markus Kunstmann Channels Systems Engineer 2009 Cisco Systems, Inc. All rights reserved. Data Centers Are under Increasing Pressure Collaboration
DCICT: Introducing Cisco Data Center Technologies
DCICT: Introducing Cisco Data Center Technologies Description DCICN and DCICT will introduce the students to the Cisco technologies that are deployed in the Data Center: unified computing, unified fabric,
N_Port ID Virtualization
A Detailed Review. Abstract: This white paper provides a consolidated study of the N_Port ID Virtualization (NPIV) feature, its usage on different platforms, and NPIV integration with EMC PowerPath on the AIX platform.
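For context: NPIV lets multiple virtual N_Ports, each with its own WWPN, log in through a single physical F_Port. A minimal sketch of enabling it on a Cisco MDS/Nexus fabric switch, assuming NX-OS syntax (the paper itself covers several platforms):

    switch# configure terminal
    switch(config)# feature npiv    ! accept multiple FLOGI/FDISC logins per F_Port
    switch(config)# end
    switch# show flogi database     ! each virtual HBA appears with its own WWPN on one interface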
Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs
Disaster Recovery Design. Ehab Ashary, University of Colorado at Colorado Springs. As head of the campus network department in the Deanship of Information Technology at King Abdulaziz University for more...
VXLAN: Scaling Data Center Capacity. White Paper
VXLAN: Scaling Data Center Capacity - White Paper. Virtual Extensible LAN (VXLAN) overview: this document provides an overview of how VXLAN works, along with criteria to help determine when and where...
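To make the mechanism concrete: VXLAN encapsulates Layer 2 frames in UDP/IP, with a 24-bit VXLAN Network Identifier (VNI) selecting the virtual segment. A minimal flood-and-learn sketch in Cisco NX-OS syntax; the VLAN, VNI, and multicast group are invented for illustration, not taken from the white paper:

    feature nv overlay
    feature vn-segment-vlan-based
    !
    vlan 100
      vn-segment 10100            ! map VLAN 100 to VXLAN network identifier 10100
    !
    interface nve1
      no shutdown
      source-interface loopback0  ! VTEP source address
      member vni 10100 mcast-group 239.1.1.1   ! BUM traffic carried over IP multicast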
Cisco Cloud Essentials for Engineers v1.0
Lesson 1: Cloud Architectures. Topic 1: Cisco Data Center Virtualization and Consolidation.
Best Practices Guide: Network Convergence with Emulex LP21000 CNA & VMware ESX Server
How to deploy converged networking with VMware ESX Server 3.5 using Emulex FCoE technology.
Building the Virtual Information Infrastructure
Technology Concepts and Business Considerations. Abstract: A virtual information infrastructure allows organizations to make the most of their data center environment by sharing computing, network, and storage resources.
Data Center Networking: Designing Today's Data Center
Executive summary: Demand for application availability...
Cloud Computing and the Internet. Conferenza GARR 2010
Cloud Computing and the Internet. Conferenza GARR 2010. Cloud computing, the current buzzword: your computing is in the cloud! Provide computing as a utility, similar to electricity, water, or phone service...
ANZA Formación en Tecnologías Avanzadas (Training in Advanced Technologies)
Syllabus: INTRODUCING CISCO DATA CENTER TECHNOLOGIES (DCICT). DCICT is the second of the introductory courses required for students looking to achieve the Cisco Certified Network Associate certification.
Data Center Convergence. Ahmad Zamer, Brocade
Ahmad Zamer, Brocade. An SNIA tutorial.
VMware NSX Network Virtualization Design Guide. Deploying VMware NSX with Cisco UCS and Nexus 7000
VMware NSX Network Virtualization Design Guide: Deploying VMware NSX with Cisco UCS and Nexus 7000. Contents: Intended Audience; Executive Summary; Why deploy VMware NSX on Cisco UCS and...
Implementing Cisco Data Center Unified Fabric Course DCUFI v5.0; 5 Days, Instructor-led
Course description: Implementing Cisco Data Center Unified Fabric (DCUFI) v5.0 is a five-day instructor-led training course.
A Platform Built for Server Virtualization: Cisco Unified Computing System
What you will learn: This document discusses how the core features of the Cisco Unified Computing System contribute to the ease...
Virtual PortChannels: Building Networks without Spanning Tree Protocol
White Paper. What you will learn: This document provides an in-depth look at Cisco's virtual PortChannel (vPC) technology, as developed...
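As a companion to the paper's topic, here is a minimal vPC sketch in Cisco NX-OS syntax; the domain ID, keepalive addresses, and port-channel numbers are illustrative:

    feature vpc
    feature lacp
    !
    vpc domain 10
      peer-keepalive destination 192.0.2.2 source 192.0.2.1
    !
    interface port-channel 1
      switchport mode trunk
      vpc peer-link               ! dedicated link between the two vPC peers
    !
    interface port-channel 20
      switchport mode trunk
      vpc 20                      ! downstream device sees both peers as one logical switch

The second peer carries a mirror of this configuration with the same vpc numbers, which is what lets a dual-homed device bundle both uplinks without blocking either on spanning tree.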
Data Center Design IP Network Infrastructure
Cisco Validated Design, October 8, 2009. Contents: Introduction; Audience; Overview; Data Center Network Topologies; Hierarchical Network Design Reference Model; Correlation to Physical Site Design.
Course information (2 courses): Introducing Cisco Data Center Networking, 4 days; Introducing Cisco Data Center Technologies, 5 days. Contact us at: telephone 888-305-1251.
Virtual Fibre Channel for Hyper-V. Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest operating systems.
Outline: VLAN; inter-VLAN communication; Layer 3 switches; Spanning Tree Protocol recap
Network Virtualization and Data Center Networks 263-3825-00, DC Virtualization Basics Part 2. Qin Yin, Fall Semester 2013. More about VLANs; Virtual Routing and Forwarding (VRF); the use of load balancing... (a basic inter-VLAN routing sketch follows below).
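A minimal inter-VLAN routing sketch on a Layer 3 switch, in Cisco IOS syntax; the VLAN IDs and addresses are illustrative:

    ip routing                      ! enable Layer 3 forwarding on the switch
    !
    vlan 10
     name USERS
    vlan 20
     name SERVERS
    !
    ! One SVI per VLAN acts as that segment's default gateway
    interface Vlan10
     ip address 10.0.10.1 255.255.255.0
    interface Vlan20
     ip address 10.0.20.1 255.255.255.0

Hosts in VLAN 10 and VLAN 20 then route through the SVIs directly on the switch instead of hairpinning through an external router.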
Expert Reference Series of White Papers: VMware vSphere Distributed Switches
Rebecca Fitzhugh, VCAP-DCA, VCAP-DCD, VCAP-CIA. www.globalknowledge.net
Nutanix Tech Note: VMware vSphere Networking on Nutanix
The Nutanix Virtual Computing Platform is engineered from the ground up for virtualization and cloud environments. This tech note describes vSphere networking...
IDC White Paper (Global Headquarters: 5 Speen Street, Framingham, MA 01701 USA; www.idc.com): Oracle Virtual Networking - Delivering Fabric...
Next-Gen Securitized Network Virtualization
Effective DR and business continuity strategies: simplify when the lights go out. www.ens-inc.com. Your premier California state government technology provider.
Top-Down Network Design
Chapter Five: Designing a Network Topology. Copyright 2010 Cisco Press & Priscilla Oppenheimer. Topology: a map of an internetwork that indicates network segments, interconnection points...
DATA CENTRE TECHNOLOGIES & SERVICES. RE-Solution Data Ltd, 170 Greenford Road, Harrow, Middlesex HA1 3QX. Executive summary: The purpose of a data centre is...
Course Contents: CCNP (Cisco Certified Network Professional)
CCNP ROUTE (642-902), EIGRP chapter: EIGRP Overview and Neighbor Relationships; EIGRP Neighborships; Neighborships over WANs; EIGRP Topology and Routes...
Direct Attached Storage
Contents: Fibre Channel Switching Mode; Configuring Fibre Channel Switching Mode; Creating a Storage VSAN; Creating a VSAN for Fibre Channel Zoning; Configuring a Fibre Channel...
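For flavor: VSANs partition one physical Fibre Channel fabric much as VLANs partition a LAN, with zoning applied per VSAN. A minimal sketch in Cisco MDS NX-OS syntax; the VSAN number, zone names, and WWPNs are invented placeholders:

    vsan database
      vsan 10 name PROD-A
      vsan 10 interface fc1/1       ! assign a Fibre Channel port to the VSAN
    !
    zone name Z-HOST1-ARRAY1 vsan 10
      member pwwn 10:00:00:00:c9:12:34:56   ! host HBA
      member pwwn 50:06:01:60:ab:cd:ef:01   ! array target port
    !
    zoneset name ZS-PROD vsan 10
      member Z-HOST1-ARRAY1
    !
    zoneset activate name ZS-PROD vsan 10   ! push the zoning to the fabric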
HBA Virtualization Technologies for Windows OS Environments
FC HBA Virtualization: Keeping Pace with Virtualized Data Centers. Executive summary: Today, Microsoft offers Virtual Server 2005 R2, a software...
Brocade One Data Center Cloud-Optimized Networks
Position Paper. Brocade's vision, captured in the Brocade One strategy, is a smooth transition to a world where information and applications reside anywhere...
Chapter 4: Spanning Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels
Design Guide, Chapter 4: Spanning Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels.
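A minimal sketch of the port-type guidance such designs typically give, in Cisco NX-OS syntax; the interface numbers are illustrative and the specifics should be checked against the guide itself:

    interface port-channel 1
      spanning-tree port type network     ! vPC peer link: enables bridge assurance
    !
    interface ethernet 1/10
      switchport mode trunk
      spanning-tree port type edge trunk  ! host-facing port: start forwarding immediately
      spanning-tree bpduguard enable      ! err-disable the port if a switch is plugged in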
Configuring Cisco Nexus 5000 Switches Course DCNX5K v2.1; 5 Days, Instructor-led
Course description: Configuring Cisco Nexus 5000 Switches (DCNX5K) v2.1 is a five-day instructor-led training program designed...
SAN Conceptual and Design Basics
Technical Note, VMware Infrastructure 3: SAN Conceptual and Design Basics. VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high-speed network that connects computer systems to storage.
Virtual Private LAN Service on Cisco Catalyst 6500/6800 Supervisor Engine 2T
White Paper. Introduction to Virtual Private LAN Service: The Cisco Catalyst 6500/6800 Series Supervisor Engine 2T supports Virtual Private LAN Service (VPLS).
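A minimal VPLS sketch in classic Cisco IOS syntax, assuming an existing MPLS core; the VFI name, VPN ID, and peer addresses are illustrative:

    ! A virtual forwarding instance (VFI) emulates one LAN across the MPLS core
    l2 vfi CUST-A manual
     vpn id 100
     neighbor 192.0.2.2 encapsulation mpls
     neighbor 192.0.2.3 encapsulation mpls
    !
    ! Attach the local VLAN to the emulated LAN
    interface Vlan100
     no ip address
     xconnect vfi CUST-A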
NX-OS and Cisco Nexus Switching
NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures. Kevin Corbin, CCIE No. 11577; Ron Fuller, CCIE No. 5851; David Jansen, CCIE No. 5952. Cisco Press.
VMDC 3.0 Design Overview
Chapter 2: The Virtual Multiservice Data Center architecture is based on foundational design principles of modularity, high availability, differentiated service support, secure multi-tenancy, and automated...
Network Virtualization Network Admission Control Deployment Guide
This document provides guidance for enterprises that want to deploy the Cisco Network Admission Control (NAC) Appliance for their campus networks.
Expert Reference Series of White Papers. Cisco Data Center Ethernet
Expert Reference Series of White Papers: Cisco Data Center Ethernet. Dennis Hartmann, Global Knowledge Senior Instructor; CCIE, CCVP, CSI. www.globalknowledge.com
RESILIENT NETWORK DESIGN
Matěj Grégr, Brno University of Technology, Faculty of Information Technology, [email protected]. Campus Best Practices: Resilient Network Design.
Cisco Nexus 1000V Switch for Microsoft Hyper-V
Data Sheet. Product overview: Cisco Nexus 1000V Switches provide a comprehensive and extensible architectural platform for virtual machine and cloud networking.
I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology
Reduce I/O cost and power by 40-50%; reduce I/O real estate needs in blade servers through consolidation; maintain...
vSphere Networking: vSphere 6.0, ESXi 6.0, vCenter Server 6.0. EN-001391-01
This document supports the version of each product listed and all subsequent versions until the document is replaced by a new edition. To check for more recent editions...
Cisco Certified Network Associate Exam. Operation of IP Data Networks. LAN Switching Technologies. IP addressing (IPv4 / IPv6)
Cisco Certified Network Associate exam, number 200-120 (CCNA). Associated certification: CCNA Routing and Switching. Operation of IP data networks: recognize the purpose and...
Designing Cisco Network Service Architectures ARCH v2.1; 5 Days, Instructor-led
Designing Cisco Network Service Architectures ARCH v2.1; 5 Days, Instructor-led Course Description The Designing Cisco Network Service Architectures (ARCH) v2.1 course is a five-day instructor-led course.
Course description: Interconnecting Cisco Networking Devices: Accelerated (CCNAX) v2.0 is a five-day, 60-hour instructor-led course.
Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family
Intel Ethernet Switch Load Balancing: System Design Using Advanced Features in the Intel Ethernet Switch Family. White Paper, June 2008.
Remote PC Guide Series - Volume 1
Introduction and Planning for Remote PC Implementation with NETLAB+ Document Version: 2016-02-01 What is a remote PC and how does it work with NETLAB+? This educational guide will introduce the concepts
Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing
White Paper. What you will learn: The data center infrastructure is critical to the evolution of IT from a cost center to a business...
Chapter 3. Enterprise Campus Network Design
Chapter 3: Enterprise Campus Network Design. Overview: The network foundation hosting these technologies for an emerging enterprise should be efficient, highly available, scalable, and manageable.
CLOUD NETWORKING FOR ENTERPRISE CAMPUS APPLICATION NOTE
Executive summary: This application note proposes Virtual Extensible LAN (VXLAN) as a solution technology to deliver departmental segmentation, business...
ADVANCED NETWORK CONFIGURATION GUIDE
White Paper: Advanced Network Configuration Guide. Contents: Introduction; Terminology; VLAN configuration; NIC bonding configuration; Jumbo frame configuration; Other I/O high-availability options. (A hedged switch-side sketch of these settings follows.)
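The guide covers the host side of these settings; as a companion, here is the switch-side counterpart in Cisco IOS syntax. A bonded/teamed host NIC typically pairs with an LACP bundle on the switch; interface numbers and VLANs are illustrative, and jumbo MTU limits and commands vary by platform:

    ! LACP bundle facing the host's bonded NIC pair
    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode active
    !
    interface Port-channel1
     switchport mode trunk                  ! carry the host's tagged VLANs
     switchport trunk allowed vlan 10,20
     mtu 9216                               ! jumbo frames must match end-to-end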
"Charting the Course...
Description "Charting the Course... Course Summary Interconnecting Cisco Networking Devices: Accelerated (CCNAX), is a course consisting of ICND1 and ICND2 content in its entirety, but with the content
Alexander Paul [email protected] IBM Certified Advanced Technical Expert (C.A.T.E.) for Power Systems Certified Cisco Systems Instructor CCSI #32044
Network Virtualization Deep dive and Network Troubleshooting in a virtualized Environment Alexander Paul [email protected] IBM Certified Advanced Technical Expert (C.A.T.E.) for Power Systems Certified
VMware Virtual Networking Concepts I N F O R M A T I O N G U I D E
VMware Virtual Networking Concepts (Information Guide). Contents: Introduction; ESX Server Networking Components; How Virtual Ethernet Adapters Work; How Virtual Switches...
NEXT GENERATION VIDEO INFRASTRUCTURE: MEDIA DATA CENTER ARCHITECTURE Gene Cannella, Cisco Systems R. Wayne Ogozaly, Cisco Systems
Gene Cannella, Cisco Systems; R. Wayne Ogozaly, Cisco Systems. Abstract: Service providers seek to deploy next-generation interactive, immersive...
Network Virtualization and Data Center Networks 263-3825-00: Data Center Virtualization - Basics. Qin Yin, Fall Semester 2013. Case-study slides: Walmart's Data Center; Amadeus Data Center; Google's Data Center; Data Center Evolution 1.0; Data Center Evolution 2.0.
PR03. High Availability
PR03 High Availability. Related topics: NI10 Ethernet/IP Best Practices; NI15 Enterprise Data Collection Options; NI16 Thin Client Overview; Solution Area 4 (Process). Agenda: Overview; Controllers & I/O; Software...
Interconnecting Cisco Networking Devices Part 2
Course number: ICND2. Length: 5 days. This course will help you prepare for the 640-816 ICND2 exam.
Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study
White Paper: Microsoft SQL Server 2012 on Cisco UCS with iSCSI-Based Storage Access in a VMware ESX Virtualization Environment - Performance Study.
Enterprise-Class Virtualization with Open Source Technologies
Alex Vasilevsky, CTO & Founder, Virtual Iron Software. June 14, 2006. Virtualization overview: in the traditional x86 architecture, each server runs a single...
VMware Virtual Infrastructure: From the Virtualized to the Automated Data Center
Senior System Engineer, VMware Inc., [email protected]. Agenda: Vision; VMware Enables Datacenter Automation; VMware Solutions...
Cisco Nexus 1000V Virtual Ethernet Module Software Installation Guide, Release 4.0(4)SV1(1)
September 17, 2010. This document describes how to install software for the Cisco Nexus 1000V Virtual Ethernet Module.
Understanding Cisco Cloud Fundamentals CLDFND v1.0; 5 Days; Instructor-led
Course description: Understanding Cisco Cloud Fundamentals (CLDFND) v1.0 is a five-day instructor-led training course designed...
Achieve Automated, End-to-End Firmware Management with Cisco UCS Manager
Achieve Automated, End-to-End Firmware Management with Cisco UCS Manager What You Will Learn This document describes the operational benefits and advantages of firmware provisioning with Cisco UCS Manager
Network Troubleshooting & Configuration in vsphere 5.0. 2010 VMware Inc. All rights reserved
Network Troubleshooting & Configuration in vsphere 5.0 2010 VMware Inc. All rights reserved Agenda Physical Network Introduction to Virtual Network Teaming - Redundancy and Load Balancing VLAN Implementation
Stretched Active-Active Application Centric Infrastructure (ACI) Fabric
May 12, 2015. Abstract: This white paper illustrates how Cisco Application Centric Infrastructure (ACI) can be implemented as...
Roman Hochuli - nexellent ag / Mathias Seiler - MiroNet AG
Roman Hochuli, nexellent ag / Mathias Seiler, MiroNet AG. [Topology diagram: core, distribution, and access layers, with peering and upstream links to the north and south.]
Ethernet-based Software Defined Network (SDN)
Cloud Computing Research Center for Mobile Applications (CCMA), ITRI. SDN introduction: decoupling of the control plane from the data plane...
Oracle Virtual Networking: Data Center Fabric for the Cloud
Sébastien Grotto, Oracle Virtual Networking Specialist. Optimize and virtualize your data center infrastructure with Oracle, June 4, 2013. Why data...
Data Center Multi-Tier Model Design
Chapter 2: This chapter provides details about the multi-tier design that Cisco recommends for data centers. The multi-tier design model supports many web service architectures, including those based on...
Ethernet Fabrics: An Architecture for Cloud Networking
White Paper (www.brocade.com): Ethernet Fabrics - An Architecture for Cloud Networking. As data centers evolve to a world where information and applications can move anywhere in the cloud, classic...
vSphere Networking: vSphere 5.5, ESXi 5.5, vCenter Server 5.5. EN-001074-02
This document supports the version of each product listed and all subsequent versions until the document is replaced by a new edition. To check for more recent editions...
FIBRE CHANNEL OVER ETHERNET
A Review of FCoE Today. Abstract: Fibre Channel over Ethernet (FCoE) is a storage networking option based on industry standards. This white paper provides an overview of FCoE...
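To ground the idea: FCoE carries Fibre Channel frames over a lossless 10GbE link via a virtual FC interface bound to the Ethernet port. A minimal sketch in Cisco NX-OS syntax (Nexus 5000-style); the VLAN, VSAN, and interface numbers are illustrative:

    feature fcoe
    !
    vlan 1001
      fcoe vsan 10                  ! map the FCoE VLAN to the storage VSAN
    !
    interface Ethernet1/10
      switchport mode trunk
      switchport trunk allowed vlan 100,1001   ! data VLAN plus the FCoE VLAN
    !
    interface vfc10
      bind interface Ethernet1/10   ! virtual FC interface rides the 10GbE port
      no shutdown
    !
    vsan database
      vsan 10 interface vfc10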
Configuring the Transparent or Routed Firewall
Chapter 5: This chapter describes how to set the firewall mode to routed or transparent, as well as how the firewall works in each mode. It also includes information about customizing...
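A minimal sketch of the mode switch on a Cisco ASA; note that changing the firewall mode clears the running configuration:

    ! Switch from the default routed mode to transparent (Layer 2) mode
    ciscoasa(config)# firewall transparent
    ciscoasa(config)# exit
    ciscoasa# show firewall
    Firewall mode: Transparent

In transparent mode the ASA bridges between interfaces like a "bump in the wire", so it can be inserted without re-addressing the adjacent networks.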
Storage Protocol Comparison White Paper TECHNICAL MARKETING DOCUMENTATION
Technical Marketing Documentation, v1.0, updated April 2012. Contents: Introduction; Storage Protocol Comparison Table; Conclusion; About the...
Virtualized Access Layer. Petr Grygárek
Virtualized Access Layer. Petr Grygárek. Goals: integrate the physical network with virtualized access-layer switches (the hypervisor vSwitch); handle the logical network connection of multiple (migrating) OS images hosted...
Networking Topology For Your System
This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.
Data Centre of the Future
Data Centre of the Future. Vblock Infrastructure Packages: Accelerating Deployment of the Private Cloud. Andrew Smallridge, DC Technology Solutions Architect, [email protected]. IT is undergoing a transformation...
Cisco Data Center Network Manager Release 5.1 (LAN)
Cisco Data Center Network Manager Release 5.1 (LAN) Product Overview Modern data centers are becoming increasingly large and complex. New technology architectures such as cloud computing and virtualization
NET ACCESS VOICE PRIVATE CLOUD
Solution Brief (2015): Net Access Voice Private Cloud - A Cloud and Connectivity Solution for Hosted Voice Applications. Net Access LLC, 9 Wing Drive, Cedar Knolls, NJ 07927. www.nac.net
Cisco Prime Network Services Controller. Sonali Kalje Sr. Product Manager Cloud and Virtualization, Cisco Systems
Cisco Prime Network Services Controller Sonali Kalje Sr. Product Manager Cloud and Virtualization, Cisco Systems Agenda Cloud Networking Challenges Prime Network Services Controller L4-7 Services Solutions
CCNP SWITCH: Implementing High Availability and Redundancy in a Campus Network
Olga Torstensson, SWITCHv6. Components of high availability: redundancy; technology (including hardware and software features)... (A first-hop redundancy sketch follows.)
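As one concrete redundancy component from this topic area, a minimal HSRP sketch in Cisco IOS syntax; the group number and addresses are illustrative:

    interface Vlan10
     ip address 10.0.10.2 255.255.255.0
     standby 10 ip 10.0.10.1        ! shared virtual gateway address for hosts in VLAN 10
     standby 10 priority 110        ! higher priority makes this switch the active gateway
     standby 10 preempt             ! reclaim the active role after recovering from a failure

The peer switch runs the same group at the default priority of 100 and takes over the virtual address if the active switch fails.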
Centec's SDN Switch: Built from the Ground Up to Deliver an Optimal Virtual Private Cloud. Contents: Virtualization Fueling New Possibilities; Virtual Private Cloud Offerings; Current Approaches...
Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright
Authors: Reza Koohrangpour, Spencer Wheelwright.
DMZ Virtualization Using VMware vSphere 4 and the Cisco Nexus 1000V Virtual Switch
What you will learn: A demilitarized zone (DMZ) is a separate network located in the neutral zone between a private (inside) network and... (A port-profile sketch follows.)
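A minimal Nexus 1000V port-profile sketch of the kind such a DMZ design relies on, in NX-OS syntax; the profile name and VLAN number are invented for illustration, not taken from the document:

    ! Port profile applied to DMZ-facing VM vNICs; appears in vCenter as a port group
    port-profile type vethernet DMZ-WEB
      vmware port-group
      switchport mode access
      switchport access vlan 50     ! DMZ VLAN, kept separate from inside segments
      no shutdown
      state enabled

Because the profile follows the VM, a vMotion between hosts carries the DMZ policy along with it instead of depending on per-host vSwitch settings.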
Analysis of Network Segmentation Techniques in Cloud Data Centers
64 Int'l Conf. Grid & Cloud Computing and Applications GCA'15 Analysis of Network Segmentation Techniques in Cloud Data Centers Ramaswamy Chandramouli Computer Security Division, Information Technology
