Data Center Networking
Jaime Lopez, Commercial Director, Federal Government
Challenges Facing IT and Hosting Providers

The Challenge: deliver cloud-scale networks that provide scale, openness and simplicity in a cost-effective architecture.

- User and Device Mobility
- Application Mobility
- Multi-vendor Operation to Enable Best of Breed
- IT Budgets and the Move from CAPEX to OPEX
- Simplicity and Lower Operating Costs
Cloud-Scale Data Center: The New Computer

- Cloud Ready / Virtualization: on-demand provisioning, hardware independence / high availability, automation
- Consolidation: high computational density, physical location consolidation, reducing data center tiers
- Cloud Scale: 10G/40G/100G, low latency / low oversubscription
- Green: efficient power management
Gartner: 5 Trends That Will Transform the Data Center

The demand for IT as a service, combined with the need to reduce costs, has pushed data centers to the brink of a second transformation.

- Hybrid IT: Perhaps the greatest effect of public cloud computing on IT concerns operations. IT organizations realize that they must not only compete with public cloud service providers (CSPs), but also act as intermediaries between internal customers and all IT services, internal or external.
- Internal clouds: As businesses grow accustomed to consuming IT as a service, IT organizations will be compelled to build internal clouds. Unfortunately, building an internal cloud is hard work and few blueprints exist.
- Hybrid clouds: Hybrid clouds are connections between two clouds, usually an internal private cloud and an external public cloud.
- User-centric computing: To compete in a global market and retain key employees, organizations often have to accommodate staff who live in remote locations and use personal devices for work.
- Data center efficiency: Competing with the external cloud requires IT organizations to strive for hyper-efficiency in their data centers. If critical data and applications are to be housed in an internal private cloud, IT organizations must deliver internal IT services in an efficient, cost-effective manner.

The new data center: In 2012, the data center will continue its transformation from a traditional, virtualized, consolidated and centralized IT infrastructure into a service-oriented and economically efficient internal cloud.
Agenda - Extreme Networks Data Center Solution

Open Fabric Architecture: Best-of-Breed Data Center Switching
Extreme Networks OpenFabric

A Fabric:
- Popular term to describe data center networks
- High-speed, low-latency, multi-path, meshed and resilient
- Managed as a single entity
- May be open or closed (e.g. Juniper QFabric)

OpenFabric:
- Provides price-performance, interoperability, flexibility, investment protection, and scalability
- Targeted at cloud data center providers (internal or external), where infrastructure is a cost
- No vendor lock-in
Extreme Networks Open Fabric Facts

Fabric-in-a-Box:
- 768 x 10GbE and 192 x 40GbE
- 2.3 µsec any-to-any port latency
- 5W per 10GbE port

Fabric Scalability:
- 4,500+ 10GbE
- 4.1 µsec any-to-any port latency
- 5W per 10GbE port

Best-of-breed hardware platforms; automation and virtualization intelligence; standards-based, open and interoperable; open ecosystem. Data Center Bridging, M-LAG, Direct Attach / VEPA, XNV, OpenFlow*. Powered by a single ExtremeXOS.

* Future availability.
Agenda - Extreme Networks Data Center Solution

Scalable Data Center Architectures: Reducing Network Tiers
Data Center Network Architecture Challenges

50% of all data center network ports sold connect to other network ports: core, aggregation, access, blade switch, vswitch, appliance.
Standard Data Center Architecture

A sample 12-rack DC network (12 racks, 384 blade servers) will have 5 layers:
- 384 vswitches
- 24 blade switches
- 12 ToR switches
- 2 aggregation switches
- 2 core switches

A total of 424 switches!
Direct Attach: Eliminate the vswitch, Virtually Reducing Network Tiers

Today's inter-VM switching: minimal traffic provisioning (if any) is done at the vswitch.
Direct Attach: Eliminate the vswitch, Virtually Reducing Network Tiers

With Direct Attach enabled on the switch, inter-VM traffic is transmitted and received on the same physical network port. In the demo, VM2's CPU and network utilization are severely impacted by a DoS attack; with CLEAR-Flow enabled to dynamically provision and block the DoS traffic, VM2's CPU and network utilization revert to healthy.

Demo setup:
- Host: Fedora 12; hypervisor: QEMU-KVM
- VM1 (guest OS: Ubuntu): gnome-system-monitor for network and CPU utilization; hping to generate a DoS attack targeted at VM2
- VM2 (guest OS: Ubuntu): gnome-system-monitor for network and CPU utilization; tcpdump to monitor attack traffic from VM1
Direct Attach Architecture

The same 12-rack DC network (12 racks, 384 blade servers) with Direct Attach has 2 tiers only:
- 0 vswitches
- 0 blade switches
- 0 ToR switches (servers connect through MRJ21 passive patch panels)
- 2 aggregation switches
- 2 core switches

A total of 4 switches vs. 424 switches!
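The two totals above can be sanity-checked with a short sketch. The per-rack tier counts are taken from the slide's 12-rack example; the function name and parameters are illustrative:

```python
def managed_switches(racks, servers_per_rack=32, blade_switches_per_rack=2,
                     tor_per_rack=1, agg=2, core=2, direct_attach=False):
    """Count switching elements in the sample DC topology.

    With Direct Attach, the vswitch, blade-switch and ToR tiers are
    collapsed onto passive patch panels, leaving only agg + core.
    """
    if direct_attach:
        return agg + core
    vswitches = racks * servers_per_rack          # one vswitch per blade server
    blade = racks * blade_switches_per_rack
    tor = racks * tor_per_rack
    return vswitches + blade + tor + agg + core

print(managed_switches(12))                       # 424 in the 5-tier design
print(managed_switches(12, direct_attach=True))   # 4 in the 2-tier design
```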
Extreme Networks Open Fabric: Data Center in a Box

BlackDiamond X8*:
- Single-tier physical and logical network
- Supports up to 768 10 GbE servers in a single switch
- Supports 128,000 virtual machines in a single switch
- <3 µsec latency
- Heterogeneous hypervisor integration
- M-LAG support for multi-path capability
- VEPA support, moving switching back to the network
- Data Center Bridging for data and storage integration
- XNV (ExtremeXOS Network Virtualization) for VM mobility management

* Future availability.
Extreme Networks Open Fabric: 40G, Standards-Based

Up to 2,256 10 GbE servers with only 3x oversubscription, using 2 x BlackDiamond X8* and 94 x Summit X670*.

* Future availability.
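The 3x figure follows from the ToR uplink ratio. A minimal sketch, assuming each Summit X670V dedicates 48 x 10GbE ports to servers and its 4 x 40GbE ports to uplinks:

```python
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth on one ToR switch."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Summit X670V as a ToR: 48 x 10GbE down, 4 x 40GbE up (assumed split)
ratio = oversubscription(48, 10, 4, 40)
print(ratio)  # 3.0 -> the 3x oversubscription quoted above
```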
Agenda - Extreme Networks Data Center Solution

Mobility: Dynamic Resource Allocation
Network Mobility of Virtual Machines

- Make the network VM-aware
- Hypervisor independent
- Zero-touch network provisioning
- Dynamic Virtual Port Profiles across the infrastructure
VM Mobility Issues Today: The Network Has Zero Visibility into the VM Lifecycle

The server admin initiates a move through the virtual machine manager. The source switch port is configured for the VM (IP: 1.1.1.2, MAC: 00:0A, QoS: QP7, ACL: deny HTTP), but the destination switch port configuration is none or disabled.

Result: when a virtual machine move occurs, whether automatic or initiated by the server admin, the network admin has no visibility into the VM's location or when the move occurred. The VM moves to a destination switch port that is incorrectly configured to deliver network services to that specific VM.
Extreme Networks XNV: Network Visibility into the VM Lifecycle

Location-based VM awareness at the network level enables efficient virtual machine mobility. Ridgeline queries the virtual machine manager for VM info, and the Virtual Port Profile (IP: 1.1.1.2, MAC: 00:0A, QoS: QP7, ACL: deny HTTP) is applied to the XNV-enabled switch port.

Ridgeline, through XML integration:
- Pulls inventory from the virtual machine manager
- Locates VMs on network switches
- Shows inventory (VM-to-switch-port mapping)
- Defines Virtual Port Profiles (VPPs)
- Assigns VPPs to VMs and distributes them
- Responds to VM motion occurrences

Result: both the VM and its Virtual Port Profile move to the destination switch port. Network-level visibility into VM movement is achieved to deliver better SLAs.
ExtremeXOS Automation

Ridgeline provisions across multiple Extreme Networks switches and integrates with hypervisor management:
- Tightly integrates with virtualization management platforms
- XML-based API
- Centralized network-level inventory
- Network-level insight into, and control of, virtualization
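As an illustration of the idea (not the actual Ridgeline or XNV API), a toy model of a Virtual Port Profile that follows a VM's MAC address across switch ports might look like:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualPortProfile:
    """Illustrative VPP: per-VM settings that should follow the VM around."""
    qos: str
    acls: list = field(default_factory=list)

class Fabric:
    """Toy model of XNV-style tracking: profiles are keyed by VM MAC and
    applied to whichever switch port the VM currently appears on."""
    def __init__(self):
        self.vpp_by_mac = {}
        self.port_config = {}          # (switch, port) -> VirtualPortProfile

    def assign(self, mac, vpp):
        self.vpp_by_mac[mac] = vpp

    def vm_seen(self, mac, switch, port):
        # Called when a VM's MAC shows up on a port (e.g. after a migration):
        # the profile follows the VM to its new location.
        self.port_config[(switch, port)] = self.vpp_by_mac[mac]

fabric = Fabric()
fabric.assign("00:0A", VirtualPortProfile(qos="QP7", acls=["deny http"]))
fabric.vm_seen("00:0A", switch="tor-1", port=12)
fabric.vm_seen("00:0A", switch="tor-2", port=7)    # VM moved; VPP follows
print(fabric.port_config[("tor-2", 7)].qos)         # QP7
```

The real system adds the pieces this sketch omits: discovering the move (from the hypervisor manager or the switch itself) and unconfiguring the old port.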
Agenda - Extreme Networks Data Center Solution

Storage Convergence: Leverage Ethernet to Reduce Cost and Complexity
Network and Storage Convergence

- Block-based storage: iSCSI, FCoE
- File-based storage: NFS, CIFS

ExtremeXOS infrastructure layer, with the Data Center Bridging (DCB) protocols:
- Priority-based Flow Control (PFC)
- Enhanced Transmission Selection (ETS)
- DCB Capabilities Exchange (DCBX)

Plus Dynamic Scripting and CLEAR-Flow.
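A rough sketch of what ETS contributes to convergence: minimum bandwidth guarantees per traffic class on a shared link, with unused bandwidth borrowable at runtime. The class names and percentages below are assumed for illustration:

```python
def ets_guarantees(link_gbps, percent_by_class):
    """Minimal ETS-style sketch: each traffic class gets a minimum bandwidth
    guarantee as a percentage of the link; percentages must sum to 100.
    (A real implementation also lets classes borrow unused bandwidth.)"""
    assert sum(percent_by_class.values()) == 100
    return {tc: link_gbps * pct / 100 for tc, pct in percent_by_class.items()}

# Assumed split on a 10GbE converged link: LAN, storage (iSCSI/FCoE), mgmt
print(ets_guarantees(10, {"lan": 40, "storage": 50, "mgmt": 10}))
# {'lan': 4.0, 'storage': 5.0, 'mgmt': 1.0}
```

PFC complements this by pausing individual priorities (e.g. the storage class) instead of the whole link, which is what makes lossless FCoE transport over Ethernet practical.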
Convergence to SAN Edge: Reference Architecture

- 2 x BlackDiamond X8 core switches, interconnected and serving the ToR layer via M-LAGs
- Summit X670 ToR switches, connected to the core via LAGs
- QLogic UA5900 converged switch bridging FCoE to the FC SANs
- Servers attach via NIC LAGs (iSCSI, NAS, any traffic) and CNA LAGs (FCoE)
Extreme Networks

- Open architecture to provide best of breed
- Scale to address the demands of the cloud
- Mobile to enable dynamic resource allocation
- Automate to create zero-touch services
Product Roadmap

This product roadmap represents Extreme Networks' current product direction. All product releases will be on a when-and-if-available basis. Actual feature development and timing of releases will be at the sole discretion of Extreme Networks. Not all features are supported on all platforms. Presentation of the product roadmap does not create a commitment by Extreme Networks to deliver a specific feature. Contents of this roadmap are subject to change without notice.
Data Center Switching Portfolio

- Summit X670 & X650 (small-large data center): top-of-rack, 1G/10G access, 10G/40G uplinks
- BlackDiamond 8800 (small-mid data center): end-of-row/mid-of-row, 1G/10G access, 10G/40G uplinks
- BlackDiamond X8 (mid-large data center): end-of-row/mid-of-row, 10G access/aggregation, 40G aggregation/core

All powered by a single ExtremeXOS.
Summit X670 Top-of-Rack

- Summit X670V: 48 x 10G + 4 x 40G, or 64 x 10G; investment protection through VIM
- Summit X670: 48 x 10G with lower latency
- 10G at 1G price points with full features
- Low latency, PHY-less design, cut-through switching
- DCB and storage convergence
- Supports 128K virtual machines
- Physical security with motion detector*

* Future availability.
BlackDiamond X8 News Summary: Real Cloud-Scale Switching for the Data Center

Highest consolidation:
- 14.5 RU - 1/3 of a rack
- 768 x 10GbE wirespeed
- 192 x 40GbE wirespeed

Lowest latency:
- 2.3 µs port-to-port

Server virtualization:
- 128K virtual machines
- VM lifecycle management (VPP & XNV)

Unmatched performance:
- 20+ Tbps capacity per switch
- 1.28 Tbps bandwidth per slot

Storage convergence:
- iSCSI, NFS, CIFS
- DCBX (PFC, ETS)
- FCoE transit

Superior availability:
- 1+1 management; N+1 fabric, power & fans; N+N power grid

Power & cooling:
- Front-to-back cooling, variable fan speed
- Only 5W per 10GbE port
- Intelligent power management

* Dell'Oro Ethernet Switching Market, Fall 2011
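The port and slot figures above are internally consistent. A quick check, assuming the 768 x 10GbE and 192 x 40GbE configurations spread ports evenly over 8 I/O slots with 1.28 Tbps of bandwidth per slot:

```python
# Sanity-check the wirespeed claims: per-slot port bandwidth must fit
# within the quoted 1.28 Tbps (1280 Gbps) slot capacity.
slots, slot_gbps = 8, 1280

for ports, speed_gbps in [(768, 10), (192, 40)]:
    per_slot = ports / slots * speed_gbps      # Gbps of ports on each slot
    print(f"{ports} x {speed_gbps}G -> {per_slot:.0f} Gbps per slot")
    assert per_slot <= slot_gbps               # 960 Gbps fits in 1280 Gbps
```

Both configurations come to 960 Gbps of front-panel capacity per slot, comfortably under the 1.28 Tbps slot bandwidth, so wirespeed forwarding is at least plausible on paper.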
Best-in-Class Cloud Performance and Power

The Lippis Report, recognized independent analytical product testing, found best-in-class latency and best-in-class power consumption, ensuring efficiency.
Vendor Comparison by Chassis: BlackDiamond X8 is a Leader, Not a Follower

An apples-to-apples comparison of 8-slot, 10/40GE-capable chassis (port counts: wirespeed vs. oversubscribed):

  Chassis                              Wirespeed 10G   Wirespeed 40G   Oversub 10G   Oversub 40G
  HP 12508                             64              0               256           0
  Juniper EX8208                       64              0               320           0
  Brocade MLXe-8                       64              0               N/A           0
  Extreme Networks BlackDiamond 8800   64              16              192           48
  Dell/F10 E600i Exa                   70              14              280           28
  Cisco NX7010                         256             48              384           N/A
  Arista 7508                          384             0               N/A           0
  Juniper QFX3008                      0               128             N/A           N/A
  Extreme Networks BlackDiamond X8     768             192             N/A           N/A
Vendor Comparison by 10GE: Data Center Space for Wirespeed 2304 x 10GE Ports

- Extreme Networks BlackDiamond X8: 1 rack (44 RU)
- Arista 7508: 1.5 racks
- Cisco NX7010: 4.5 racks
- Brocade MLXe-8: 6 racks
- Juniper EX8208: 12 racks
- Dell/F10 E600i: 16.5 racks
- HP 12508: 18 racks

[Chart: total power (kW) for the same configurations - Extreme Networks, Arista, Cisco, Brocade, Juniper, Dell/F10, HP]
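The rack counts can be approximated from chassis port density and height. A sketch, assuming chassis stack cleanly into 44 RU racks and ignoring cabling and PDU space:

```python
import math

def racks_needed(total_ports, ports_per_chassis, chassis_ru, rack_ru=44):
    """Racks required for a given wirespeed port count: round up to whole
    chassis, then round the stacked height up to whole racks."""
    chassis = math.ceil(total_ports / ports_per_chassis)
    return math.ceil(chassis * chassis_ru / rack_ru)

# BlackDiamond X8: 768 x 10GbE in 14.5 RU -> 3 chassis, 43.5 RU, 1 rack
print(racks_needed(2304, 768, 14.5))  # 1
```

The competitor figures follow the same arithmetic with each vendor's wirespeed port count and chassis height.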
First to Market with 40G Connectivity: Wirespeed 576 x 40GE Ports

Number of racks required to deliver nearly 600 40G data trunk connections:
- Extreme BlackDiamond X8: 1 rack (44 RU)
- Juniper QFX3008: 2.5 racks
- Cisco NX7010: 6 racks
- Cisco Catalyst 6509V: 18 racks
- Dell/F10 E600i: 20.5 racks

[Chart: total power (kW) for the same configurations - Extreme BDX, Cisco N7K, Juniper QFX3K, Cisco 6K, Dell E600i]
Partnerships Enable Best-of-Breed Solutions

Ecosystem partners across: open enterprise cloud, open-source cloud architecture, virtualization, server & storage.
Ethernet Data Center Solution Ecosystem

Systems, applications, OpenFlow, management & orchestration, storage & security, virtualization.
Why Extreme?

- Thought leadership & innovation
- Standards-based, open & interoperable
- Simplified portfolio across data center segments
- Open ecosystem
- It is green
Thank You
What Analysts Are Saying

- Info-Tech, a research firm in Canada, rated Extreme Networks a data center "champion" and "exemplary" (November 2011 report)
- Independent tests by the Lippis Report called our core switch (BDX8) 3x-10x faster than the competition (November 2011 report)
- Dell'Oro, a leading analyst group, rated Extreme Networks a top-5 data center vendor at core & edge (November 2011 report)
- A Gartner report on the data center rated Extreme Networks a data network specialist, joining JNPR & BRCD (November 2011 report)
- A Goldman Sachs CIO survey of Cisco customers called Extreme a top-5 network vendor (September 2011 report)
- #1 and #2 in the new high-growth 40G market (December 2011 report)

Extreme Networks Confidential and Proprietary. Not to be distributed outside of Extreme Networks, Inc.
In Focus: Info-Tech Report

- Champions receive high scores for most evaluation criteria and offer excellent value. They have a strong market presence and are usually the trend setters for the industry.
- Innovators have demonstrated innovative product strengths that act as their competitive advantage in appealing to niche segments of the market.
- Market Pillars are established players with very strong vendor credentials, but with more average product scores.
- Emerging Players are newer vendors who are starting to gain a foothold in the marketplace. They balance product and vendor attributes, though score lower relative to market Champions.