Fabrics, SDN and Big Data. What this means to your white space




Fabrics, SDN and Big Data. What this means to your white space
Carrie Higbie, RCDD/NTS, CDCP, CDCS, Global Director, Data Center Solutions and Services, Carrie_Higbie@siemon.com
TechTarget Ask the Experts and columnist: SearchNetworking, SearchVoIP, SearchDataCenters. BOD and former President, BladeSystems Alliance. Member, Ethernet Alliance and IEEE. Columnist for Performance Networking, SearchCIO, SearchMobile, ZeroDowntime, ComputerWorld, IT World, Network World, Financial Review.
04/23/2014 Twitter: @carriehigbie

Technology Leadership: TIA, ISO/IEC, Ethernet Alliance, IEEE, ANZ standards committees, BICSI, US Green Building Council, various world green initiatives, Uptime Institute, ASHRAE, AFCOM, 7x24 Exchange, Open Data Center Alliance, etc.

Cloud Security Issues: data dispersal and international privacy laws (EU Data Protection Directive and U.S. Safe Harbor program); exposure of data to foreign governments and data subpoenas; data retention issues; need for isolation management; multi-tenancy; logging challenges; data ownership issues; quality of service guarantees.

Challenges for Cloud Infrastructures and data centers already built: loss of control; shift of employment; potential loss of employment; security can be a big unknown; SLAs; provider expertise in the cloud space; portability; knowledge transfer; compliance; location; lack of standards; how to bill for services; public, private or hybrid?

Driving Private Clouds: portability; confidence as a service; what works in the cloud and what doesn't; vendor dependency; IT threat; security; bankruptcy; geographic diversity; power users circumvent IT (HR involvement? IT may not be involved in those decisions); tangible and intangible ROI calculations; standards; local and regional sovereignty requirements; location of data.

Risk Assessment is Critical: type of information; technology (vendor lock-in could be a threat; open systems, though vendors are trying to close their systems); location of information (sovereignty).

Lifecycle of a Data Center: inception, design, build, operations, retire/retrofit.

Inception: we need a data center. Where? How big? How much power? How much cooling? Colo or not? Cloud or not (public, private, hybrid)?

What can go wrong at inception? Dreaming too big; ignoring new trends; living too cutting edge with nothing to fall back on; relying too much on consultants' previous designs; relying on the wrong design/build or consulting firms.

Example: what's wrong here? PDUs add to the cooling load; the walkway is blocked; open space mixes airflow; there is no access to cabinets blocked by columns.

PUE (Power Usage Effectiveness): the de facto standard metric, calculated as total facility power / IT load. Most data centers are in the 1.9-2.6 range; the closer to 1, the better. It requires intelligent PDUs or some other means to measure used power. DCiE (Data Center infrastructure Efficiency) is 1/PUE, expressed as a percentage. PUE Version 2 uses kWh instead of kW.
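
To make the arithmetic behind these two metrics concrete, here is a minimal sketch; the 2,000 kW facility draw and 1,000 kW IT load are hypothetical example inputs, not figures from the presentation.

```python
def pue(total_facility_power_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT load."""
    return total_facility_power_kw / it_load_kw

def dcie(total_facility_power_kw: float, it_load_kw: float) -> float:
    """Data Center infrastructure Efficiency: 1/PUE, expressed as a percentage."""
    return 100.0 / pue(total_facility_power_kw, it_load_kw)

# Hypothetical facility drawing 2,000 kW in total to support a 1,000 kW IT load.
print(pue(2000, 1000))   # 2.0 -- inside the typical 1.9-2.6 range
print(dcie(2000, 1000))  # 50.0 (%)
```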

Green Grid: The Green Grid is proposing a new metric, data center compute efficiency (DCcE), and its underlying sub-metric, server compute efficiency (ScE). These metrics will enable data center operators to determine the efficiency of their compute resources, which allows them to identify areas of inefficiency. Using DCcE and ScE, operators can continually refine and increase the efficiency of their compute resources in the same way that they use power usage effectiveness (PUE) to improve data center infrastructure. CUE (Carbon Usage Effectiveness) is another such metric. All are relatively new and not (to date) widely adopted.

PUE 2

Ethernet, Energy Efficient Ethernet, Data Center Ethernet: all things Ethernet! EEE is a game changer, especially for 10GBASE-T: it provides a true idle state and significantly lowers power. Data Center Ethernet collapses the backbone structure, operating at Layer 2 instead of Layer 3; the idea is to increase speed (bridge rather than route) and provide priority. Ethernet has a LOT of overhead and is not intelligent.

Detailed Design: original 2D customer floor plan concept; basic 3D BIM-ready model.

Phase 2 Customer Case Study, Thermal Analysis Overview: hot aisle/cold aisle configuration. Based on the customer-proposed type and placement of CRAC units, the design created an optimal layout for thermal management, which was then run through thermal analysis software. Potential hotspots were identified and corrected in the design. Theoretical analysis was performed on multiple levels of equipment population to ensure the ability to manage the increased heat load of future equipment additions. Findings: airflow management practices such as vertical chimneys on SAN cabinets were recommended, and the thermal analysis was provided to the user for communication with the cooling vendor to ensure a right-sized CRAC system.

Thermal Analysis (sample excerpt)

Space Planning and Colocation Facilities

TIA 942, 942-2, 942-A: this standard (942-A) replaces ANSI/TIA-942, dated April 12, 2005, and its addenda. It incorporates and refines the technical content of ANSI/TIA-942 Addendum 1 (data center coaxial cabling specifications and application distances) and ANSI/TIA-942 Addendum 2 (additional guidelines for data centers).

Expanded Topology New Area

Quick case studies: the savings behind a great design. Must take into account the entire data center ecosystem: customer needs assessment, knowledge of all DC products, knowledge of standards and regulations. (Copyright The Siemon Company)

Traditional 3-Tier: 3-tier switch architectures were common practice, with core and SAN switches in the MDA, aggregation switches in the MDA, IDA or HDA, and access switches in the HDA. Shorter distances usually accommodated more than two connectors without exceeding loss budgets. This design is no longer adequate for non-blocking, low-latency communication between virtualized servers, because it caused server-to-server communication to traverse a north-south pattern through multiple switches.

Prior to Top of Rack

Look what Disappears

With FCoE look what else disappears

Look what's left: minimal versatility.

A better design: aggregate/core BASE-T switches in the HDA. Every red line is 24 or 48 channels of copper (primary network); every blue line is 24 or 48 channels of secondary copper.

Top of Rack Switching (3-tier): copper needs to be added back for management. Channels and agility are sacrificed in favor of expensive active equipment, which incurs ongoing opex, as opposed to passive components, which do not. Switch purchases are made based on the number of cabinets rather than the number of servers requiring a switch connection, resulting in waste.

With ToR, inter-cabinet connections change to fiber. These fiber links are going to be high speed and provide a good migration path for 40/100GbE. There will be MAC (move/add/change) work to get to higher strand count applications.

Typical layout: limited reach for point-to-point cables; vendor ID locks cables to specific vendors (new vendor = new cables).

There is a 1, 3 and 5 m limitation on passive SFP+ direct-attach cables and 10 m on active. To fill a 48-port switch, you would need to be able to put 48 servers in the cabinet per network, which is impossible in most data centers, as they are limited by weight, power and cooling. Most data centers also don't allow cabinet-to-cabinet spaghetti. This leads to many unusable ports.

[Chart: Installed port cost per data center cabling media. Compares hardware, cable cost and maintenance (15% of cost year over year) for 10GBASE-CU active 7 m, 10GBASE-CU 5 m, FCoE FC/10GbE, InfiniBand QDR, 10GBASE-CX4, 10GBASE-LX4, 10GBASE-LRM, 10GBASE-LR, 10GBASE-SR and 10GBASE-T (30 m); the cost axis runs from $0 to $10,000 per port. The electronics-based options carry a 90-day warranty from the electronics manufacturer, while 10GBASE-T structured cabling carries a 20-year warranty.]

Costs for switching (top of rack), list prices:
78 x Nexus 2000 32-port 10G @ $15,000 = $1,170,000 (1,092 ports used, 1,404 unused)
78 x redundant power supplies @ $500 = $39,000
312 x SFP+ uplink ports @ $1,500 = $468,000
Total for 2000s: $1,677,000
10 x Nexus 5000 32-port @ $23,010 = $230,100
39 x SFP+ modules @ $5,200 = $202,800
10 x redundant power supplies @ $500 = $5,000
Total for 5000s: $437,900
2 x Cisco 7010 core switches @ $79,000 = $158,000
2 x redundant power supplies @ $7,500 = $15,000
4 x fiber cards for uplinks @ $70,000 = $280,000
Total for 7000s: $453,000
Grand total: $2,567,900 (does not include software)
3 years maintenance: $1,155,555
3-year total: $3,723,455

Zoned approach, list prices:
34 x Nexus 2000 32-port 10G @ $15,000 = $510,000
34 x redundant power supplies @ $500 = $17,000
136 x SFP+ uplink ports @ $1,500 = $204,000
Total for 2000s: $731,000 (savings $946,000)
5 x Nexus 5000 32-port @ $23,010 = $115,050
20 x SFP+ modules @ $5,200 = $104,000
5 x redundant power supplies @ $500 = $2,500
Total for 5000s: $221,550 (savings $216,260)
2 x Cisco 7010 core switches @ $79,000 = $158,000
2 x redundant power supplies @ $7,500 = $15,000
2 x fiber cards for uplinks @ $70,000 = $140,000
Total for 7000s: $313,000 (savings $140,000)
Grand total: $1,265,550 (savings $1,302,260)
3 years maintenance: $569,497.50 (savings $586,057.50)
3-year total: $1,835,047.50 (savings $1,888,407.50)

Amount that could be used for other equipment: $48,420.71 per cabinet, and this is ONLY 39 cabinets! According to our ROI sheet, 48 ports per cabinet (24 per network) of Category 6A F/UTP with 30 m channels costs $105,399.00, or roughly 2½ cabinets' worth of savings. Total savings to the customer after cabling: $1,783,008.50.
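
The arithmetic behind the two bills of materials above can be re-run in a few lines; quantities and list prices come from the slides, and the 15% per-year maintenance rate is the assumption quoted in the earlier installed-port-cost chart.

```python
def bom_total(items):
    """Sum a bill of materials given (quantity, unit list price) pairs."""
    return sum(qty * price for qty, price in items)

top_of_rack = bom_total([
    (78, 15_000), (78, 500), (312, 1_500),  # Nexus 2000s, PSUs, SFP+ uplink ports
    (10, 23_010), (39, 5_200), (10, 500),   # Nexus 5000s, SFP+ modules, PSUs
    (2, 79_000), (2, 7_500), (4, 70_000),   # Nexus 7010s, PSUs, fiber uplink cards
])
zoned = bom_total([
    (34, 15_000), (34, 500), (136, 1_500),
    (5, 23_010), (20, 5_200), (5, 500),
    (2, 79_000), (2, 7_500), (2, 70_000),
])

def three_year_cost(capex, maintenance_rate=0.15, years=3):
    """Hardware list price plus maintenance assumed at 15% of list per year."""
    return capex * (1 + maintenance_rate * years)

savings_3yr = three_year_cost(top_of_rack) - three_year_cost(zoned)
cabling = 105_399.00  # 6A F/UTP, 48 ports per cabinet across 39 cabinets (ROI sheet figure)

print(top_of_rack, zoned)     # 2,567,900 and 1,265,550
print(savings_3yr / 39)       # ~48,420.71 per cabinet over three years
print(savings_3yr - cabling)  # ~1,783,008.50 net savings after cabling
```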

Nexus 2000s in top of rack:
720 x Nexus 2000 32-port @ $15,000 = $10,800,000 (23,040 ports: 10,080 used, 12,960 unused)
720 x redundant power supplies @ $500 = $360,000
5,760 x SFP+ uplink ports @ $1,500 = $8,640,000
Total for 2000s: $19,800,000; cost per port $859.38 ($8,662,500 in used ports, $11,137,500 in unused ports)
Required 5000s for uplinks:
192 x Nexus 5000 32-port @ $23,010 = $4,417,920 (6,144 ports: 2,880 used, 0 unused)
768 x SFP+ uplink modules @ $5,200 = $3,993,600
192 x redundant power supplies @ $500 = $96,000
Total for 5000s: $8,507,520
7000s and line cards:
4 x Cisco 7010 core switches @ $79,000 = $316,000
4 x redundant power supplies @ $7,500 = $30,000
24 x fiber cards for uplinks @ $70,000 = $1,680,000 (768 ports: 768 used, 0 unused)
Total for 7000s: $2,026,000
Grand total with top of rack: $30,333,520; port count total 29,952 (13,728 used, 12,960 unused)
Zoned approach (8,400 required ports at 14 servers per cabinet, each dual network attached):
150 x Nexus 2000, using required ports, @ $15,000 = $2,250,000 (4,800 ports: 4,200 used, 600 unused)
150 x redundant power supplies @ $500 = $75,000
1,200 x SFP+ uplink ports @ $1,500 = $1,800,000
Total for 2000s: $4,125,000; cost per port $859.38 ($3,609,375 in used ports, $515,625 in unused ports)
5000s required if centralized:
19 x Nexus 5000 32-port @ $23,010 = $437,190 (608 ports: 600 used, 8 unused)
76 x SFP+ uplink modules @ $5,200 = $395,200
19 x redundant power supplies @ $500 = $9,500
Total for 5000s: $841,890
Half the 7000s required:
2 x Cisco 7010 core switches @ $79,000 = $158,000
2 x redundant power supplies @ $7,500 = $15,000
3 x fiber cards for uplinks @ $70,000 = $210,000 (96 ports: 76 used, 8 unused)
Total for 7000s: $383,000
Grand total zoned: $5,349,890; port count total 5,504 (4,876 used, 616 unused)
Total port savings: 24,448 (8,852 used, 12,344 unused). Roughly $25 million in savings, and unused ports drop from 12,960 to 616.
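
As a sketch of how the cost-per-port and stranded-port dollars in this larger example are derived (access-layer figures taken from the table above):

```python
def port_economics(total_cost, total_ports, used_ports):
    """Split an access-layer spend into dollars tied to used vs. unused ports."""
    cost_per_port = total_cost / total_ports
    unused_ports = total_ports - used_ports
    return cost_per_port, used_ports * cost_per_port, unused_ports * cost_per_port

# Top of rack: 720 Nexus 2000s = 23,040 access ports, 10,080 actually used.
print(port_economics(19_800_000, 23_040, 10_080))
# -> ($859.38/port, $8,662,500 in used ports, $11,137,500 stranded in unused ports)

# Zoned: 150 Nexus 2000s = 4,800 access ports, 4,200 used.
print(port_economics(4_125_000, 4_800, 4_200))
# -> ($859.38/port, $3,609,375 in used ports, $515,625 stranded)
```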

Enter New Switch Fabrics: switch fabrics typically use only one or two tiers of switches, with interconnection switches in the MDA or IDA and access switches in the HDA or EDA; aggregation switches are usually eliminated. This provides lower latency and greater bandwidth between any two points, eliminates traffic having to travel through multiple switch layers, and enables dynamic east-west, server-to-server traffic, making it ideal for virtualized data centers. Fat-tree is the common switch fabric for today's data center, also referred to as leaf and spine.

What this looks like: every server connects to a more centralized switch; in the fabric world this is also known as leaf and spine. A two-connector channel is the basis for 40GBASE-T.

Cross Connects in the Data Center: a cross connect creates a convenient patching area in which fiber panels that mirror switch ports connect via fixed links, creating an any-to-all configuration; MACs are made at the panels via fiber jumpers. It can be located in a separate cabinet, keeping switches untouched and secure in their own cabinet, and it allows for one-time deployment of fiber from MDA to HDA while simplifying the addition of new equipment. The fiber can be used for multiple purposes (networking or SAN): swap a server's fiber connection from networking to SAN using a jumper at the cross connect.

Another view: one-time deployment of fiber from the cross connect in the MDA to cross connects at the ends of rows in the HDA. Fiber jumper changes at the end-of-row cross connects allow the fiber to be used for various purposes. Each cross connect adds connection points and loss.

Higher Speeds Have More Stringent Insertion Loss Requirements: Ethernet speeds are migrating from 1 and 10 Gb/s to 40 and 100 Gb/s, and Fibre Channel (SAN) speeds from 8 Gb/s to 16 and 32 Gb/s. Supported distances and channel loss budgets decrease as speeds increase, and the standards specify a maximum channel loss for each application.
Ethernet (distance / max channel loss):
OM3: 1000BASE-SX 1,000 m / 4.5 dB; 10GBASE-SR 300 m / 2.6 dB; 40GBASE-SR4 and 100GBASE-SR10 100 m / 1.9 dB
OM4: 1000BASE-SX 1,100 m / 4.8 dB; 10GBASE-SR 400 m / 2.9 dB; 40GBASE-SR4 and 100GBASE-SR10 150 m / 1.5 dB
Fibre Channel (distance / max channel loss):
OM3: 8 Gb/s 150 m / 2.0 dB; 16 Gb/s 100 m / 1.86 dB; 32 Gb/s 70 m / 1.87 dB
OM4: 8 Gb/s 190 m / 2.19 dB; 16 Gb/s 125 m / 1.95 dB; 32 Gb/s 100 m / 1.86 dB

Or to put it another way: aggregate/core BASE-T switches in the HDA. If the top-of-rack switch is removed, the fiber in the top of the cabinet becomes a passive cross connect that can be used for SAN. The same would hold true if the aggregate end-of-row switch were removed. This adds additional passive connections to the channel.

Sacrificing copper increases fiber and cost: OM4 is the minimum recommended in TIA, and low-loss connectors are required, or active connections that turn passive may cause a problem with link loss budgets.
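
A rough loss-budget check illustrates the point. The sketch below assumes typical multimode figures of 3.0 dB/km fiber attenuation at 850 nm, 0.5 dB per standard mated connector pair and 0.35 dB per low-loss pair; these values are illustrative assumptions, not numbers from the presentation.

```python
def channel_loss_db(length_m, mated_pairs, fiber_db_per_km=3.0, db_per_pair=0.5):
    """Estimated channel insertion loss: fiber attenuation plus connector pairs."""
    return (length_m / 1000.0) * fiber_db_per_km + mated_pairs * db_per_pair

BUDGET_40G_SR4_OM4 = 1.5  # max channel loss from the table above

# 100 m OM4 channel with three mated pairs (e.g. two cross connects plus a patch point):
print(channel_loss_db(100, 3))                    # 1.8 dB -> exceeds the 1.5 dB budget
print(channel_loss_db(100, 3, db_per_pair=0.35))  # 1.35 dB -> fits when low-loss pairs are used
```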

Looking at costs (Three Tier / Leaf-Spine DAC / Leaf-Spine 10GBASE-T):
Low density, 14 servers per cabinet:
Install cost: $8,816,885.18 / $11,786,235.65 / $8,638,321.02
Average cost per server cabinet: $61,228.37 / $70,156.16 / $59,988.34
Annual networking power cost: $91,328.26 / $101,419.78 / $44,402.69
High density, 40 servers per cabinet:
Install cost: $16,295,362.88 / $23,208,375.30 / $18,722,843.39
Average cost per server cabinet: $113,162.24 / $138,145.09 / $130,019.75
Annual networking power cost: $141,621.87 / $177,610.75 / $106,717.82

Fabrics in particular (Leaf-Spine DAC / Leaf-Spine 10GBASE-T):
Low density, 14 servers per cabinet:
Total equipment/cabling cost: $11,786,235.65 / $8,638,321.02
Average cost per server cabinet: $70,156.16 / $59,988.34
Annual networking power cost: $101,419.78 / $44,402.69
Total cabling cost: $1,222,357.82 / $70,327.30
High density, 40 servers per cabinet:
Total equipment/cabling cost: $26,394,022.02 / $21,596,114.19
Average cost per server cabinet: $157,107.27 / $149,973.02
Annual networking power cost: $177,610.75 / $106,717.82
Total cabling cost: $5,123,942.02 / $2,078,260.76
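
For context on how an "annual networking power cost" line like those above can be estimated, here is a hedged sketch; the 60 kW networking load, 1.9 PUE and $0.10/kWh tariff are assumed example values, not figures from the presentation.

```python
def annual_power_cost(networking_load_kw, pue=1.9, usd_per_kwh=0.10, hours_per_year=8760):
    """Networking equipment draw, scaled by PUE to facility power and priced per kWh."""
    return networking_load_kw * pue * hours_per_year * usd_per_kwh

# Example: a 60 kW networking load in a PUE 1.9 facility at $0.10/kWh.
print(round(annual_power_cost(60)))  # ~99,864 USD per year
```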

Questions??? @carriehigbie Carrie_higbie@siemon.com