High Performance Cloud Connect and DCI Solution at Optimum Cost
Chandra Shekhar Pandey, VP PLM Platform Solutions, BTI Systems
San Jose, CA USA, February 2012
Cloud Connect/DCI Solution
Cloud Connect/DCI key requirements
How does Cloud Connect / Data Center Interconnect affect application performance?
Current approaches to solving Cloud Connect/DCI issues
What is the right solution for Cloud / Data Center Interconnect?
Summary
Q&A
Cloud Connect/DCI Requirements
Low-latency interconnect for high-performance computing
10GE/40GE to enable virtual machines, SAN and NAS across DCs
Business continuity and disaster recovery
DC expansion/consolidation for better application performance (database, email, video, real-time applications)
Load balancing across DCs: serve customers at optimum cost with better performance; better utilize resources (load sharing across DCs)
Power and cooling efficiency: efficient power and space utilization
Virtualization: high utilization by sharing resources; virtualization is pointing away from Layer 3 IP VPNs toward L2VPNs, VPLS and pure VLAN extension
Data storage and replication: scheduled or synchronous, using a low-latency, high-bandwidth DC interconnect
Global collaborative services
Driving Forces behind DC/Clouds
Lower cost will drive higher levels of virtualization (HPC, SAN/NAS and network): higher resource utilization, lower power and cooling cost
Cloud migration for cost efficiency: small companies will migrate to the cloud for cost efficiency; Fortune 2000 companies will build private clouds and leverage public clouds for excess workload
Virtualization, Virtualization, Virtualization: buy big servers and big storage to achieve higher performance plus power efficiency through better utilization; virtualize security, web and networking gear; BYOD will finally drive VDI to enable switching between the personal and corporate worlds
GREEN, GREEN, GREEN
Cloud/DC HW spend growth ($94B in 2011 to $147B in 2016)
Cloud Connect/DCI Solution
Cloud Connect/DCI key requirements
How does Cloud Connect / Data Center Interconnect affect application performance?
Current approaches to solving Cloud Connect/DCI issues
What is the right solution for Cloud / Data Center Interconnect?
Summary
Q&A
DCI Impact on Applications
Ultra-low latency across DCs for SAN/NAS/HPC = higher performance for all applications
Lower total cost of ownership: fewer servers and less DAS needed in each DC, meaning lower CAPEX, lower OPEX (power/cooling) and fewer network elements to manage
Facilitates vMotion beyond the data center (long-distance live migration): requires <= 5 ms latency and >= 10GigE
Move critical applications out of harm's way (disaster recovery)
Load balancing across data centers and better utilization of resources
Shared SAN across data centers: eases the BB-credit limit, i.e. better utilization of FC channel bandwidth; long-distance vStorage moves; distributed storage across data centers, such as HDFS-style applications
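The BB-credit point can be made concrete. A Fibre Channel sender may only transmit a frame while it holds a buffer-to-buffer credit, so keeping a long link fully utilized requires enough credits to cover the round-trip time. A rough sketch of the sizing arithmetic (the function name, the full-size-frame assumption and the ~5 us/km fiber delay figure are illustrative, not from the slides):

```python
import math

def bb_credits_needed(distance_km, line_rate_gbps, frame_bytes=2148):
    """Approximate buffer-to-buffer credits needed to keep a Fibre
    Channel link fully utilized over a given distance."""
    # Light in standard single-mode fiber takes ~5 us per km; credits
    # must cover the round trip (frame out, R_RDY credit returned).
    rtt_s = 2 * distance_km * 5e-6
    # Time to serialize one full-size frame onto the wire.
    frame_time_s = frame_bytes * 8 / (line_rate_gbps * 1e9)
    # One credit per frame in flight, plus the one being transmitted.
    return math.ceil(rtt_s / frame_time_s) + 1
```

At 8 Gbps over 10 km this gives roughly 48 credits, and the requirement grows linearly with distance, which is why sharing a SAN across data centers needs switches or DCI gear with deep credit buffers.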
Side Effects of Inefficient DCI/Cloud Connects
Poor application performance
More HPC and storage must be added to each DC
Poor utilization of resources across data centers due to a lack of real-time clustering and load sharing
Redundant resources in each DC to handle peak-load scenarios; this can be avoided with a high-performance DCI, which lets resources across DCs be shared as if they were next to each other
CPUs spend significant cycles waiting for IO transactions to complete instead of processing applications/transactions
Cloud Connect/DCI Solution
Cloud Connect/DCI key requirements
How does Cloud Connect / Data Center Interconnect affect application performance?
Current approaches to solving Cloud Connect/DCI issues
What is the right solution for Cloud / Data Center Interconnect?
Summary
Q&A
Current Approach
Add more router ports and additional wavelengths across the DC, if the team is able to isolate the issue to the DCI
Most of the time it is hard to isolate the issue to the DCI, and the first line of attack is "let's add more servers, memory, storage, etc."
A DC interconnect without ultra-low latency does not improve performance much: servers/VMs still spend a significant share of their cycles (~50%) waiting for IO transactions to complete, or retrying after time-outs
A DC interconnect with ultra-low latency not only improves IO transaction performance significantly, it also improves the overall transaction rate, so fewer VM resources are needed, reducing both CAPEX and OPEX
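The "~50% of cycles waiting on IO" figure implies a simple Amdahl's-law-style bound on how much a lower-latency interconnect can improve overall transaction rates. A minimal sketch (the function and its example inputs are illustrative, not from the slides):

```python
def transaction_rate_gain(io_wait_fraction, io_speedup):
    """Overall throughput gain when only the IO-wait portion of a
    workload is accelerated (Amdahl's law applied to IO latency)."""
    compute_time = 1 - io_wait_fraction       # unaffected by the DCI
    io_time = io_wait_fraction / io_speedup   # shrunk by lower latency
    return 1 / (compute_time + io_time)

# If VMs spend 50% of their time waiting on IO and a low-latency DCI
# cuts IO wait 5x, the overall transaction rate improves ~1.67x --
# capped at 2x no matter how fast the interconnect gets.
print(transaction_rate_gain(0.5, 5))
```

The cap also shows why the converse holds: adding servers without fixing IO latency leaves the waiting fraction intact, which is the "add more servers" trap described above.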
Current Approach: Routers, Muxponders, Transponders, ROADM
[Diagram: routers at Sites A, B and C connected by optical transport over a ROADM ring, managed by a centralized NMS; GE/FC-x and 10GE router ports feed GE/FC muxponders and GigE/10GE transponders into an xd ROADM.]
Cloud Connect/DCI Solution
Cloud Connect/DCI key requirements
How does Cloud Connect / Data Center Interconnect affect application performance?
Current approaches to solving Cloud Connect/DCI issues
What is the right solution for Cloud / Data Center Interconnect?
Summary
Q&A
Right Solution
Ultra-low latency across DCs, i.e. no significant latency added beyond the fiber delay
Better performance from all DC infrastructure: lower CAPEX for HPC/SAN and lower OPEX (power) through load sharing and better utilization; fewer servers and LAN ports needed
Packet optical solutions to minimize the number of wavelengths and router ports
Significant savings
Cloud Connect/DC Interconnect
Integrated Packet Optical DCI: Packet Engine + ROADM
[Diagram: intelligent packet optical platforms at Sites A, B and C on a ROADM ring, managed by a centralized NMS; 10/40/100GE, FC-x, OC-x, OTN-x and SD/HD-SDI client ports feed a packet engine and xd ROADM; point-to-point links use a packet engine + ROADM or transponder.]
Intelligent Packet Optical based DCI
Minimize service edge router ports, i.e. use them where needed (for gateway functions, not for DCI)
Multipoint-to-multipoint connectivity; statistical multiplexing
No STP, i.e. all links are active!
Sub-50ms protection (integrated OAM)
True data center extension: extend the LAN, extend the SAN, extend compute
Provides a highly flexible and highly available data center: live migration (possible due to ultra-low latency across the DCI); non-disruptive, continuous data availability
Statistical Multiplexing Gain
Without stat mux (GE muxponders + 10GE transponders + xd ROADM): 74xGE + 15x10GE clients = 224 Gbps carried on 23 wavelengths
With packet engines + xd ROADM and 4:1 statistical multiplexing: the same 74xGE + 15x10GE clients fit in 56 Gbps on 6 wavelengths
Result: 75% reduction in WDM optics and 25% reduction in overall cost (assumes 4:1 stat mux gain)
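The wavelength arithmetic on this slide can be checked directly. A small sketch (the function name is illustrative; the 4:1 gain and the client mix are the slide's own figures):

```python
import math

def wavelengths_after_mux(client_gbps, stat_mux_gain, wave_gbps=10):
    """10G wavelengths needed once clients share capacity via
    statistical multiplexing."""
    return math.ceil(client_gbps / stat_mux_gain / wave_gbps)

client = 74 * 1 + 15 * 10                     # 224 Gbps of client traffic
waves_no_mux = 15 + math.ceil(74 / 10)        # 23: one per 10GE, plus GE muxponders
waves_mux = wavelengths_after_mux(client, 4)  # 224/4 = 56 Gbps -> 6 wavelengths
print(client, waves_no_mux, waves_mux)        # 224 23 6
```

23 down to 6 wavelengths is the slide's ~75% reduction in WDM optics; the stat-mux gain is only valid when the GE/10GE clients are not all bursting at full rate simultaneously.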
Intelligent Packet Optical Transport DCI (Redundancy + HA)
Sub-50ms protection: unprotected, client cross-card LAG, or line 1:1 optical protection (active/standby or active/active)
Integrated OAM (Y.1731): end-to-end service visibility; service performance (availability, latency, jitter, packet loss); troubleshooting tools (loopback, link trace)
Cloud Connect Solution
vMotion across DCI/Cloud
LAN Extension across Data Center
Low Latency Native SAN Extension
Cloud Connect/DCI Solutions
[Diagram: virtualized/stacked 3x 7200 shelves delivering 800G per system; channels 5 & 6, 7 & 8, and 17, 18 & 19 shown; a 10G MXP provides additional GE capacity.]
Operational Simplicity for the DCI Solution
1 - Select ports
2 - Confirm ports
3 - Progress indication
4 - Status
Cloud Connect/DCI Solution
Cloud Connect/DCI key requirements
How does Cloud Connect / Data Center Interconnect affect application performance?
Current approaches to solving Cloud Connect/DCI issues
What is the right solution for Cloud / Data Center Interconnect?
Summary
Q&A
Summary: The Right DCI Solution - Intelligent Packet Optical Platform based DCI
25%-50% CAPEX/OPEX savings due to higher utilization of HPC/SAN, lower power dissipation and better operational efficiency
Simplified operation: bulk provisioning and monitoring (low OPEX)
Scale as you grow: 10G to 800G with one system at optimum cost, using virtual chassis/stacking innovation
Standard interfaces (GigE/10GE, HD/SD-SDI, FC-x/FICON, OC-x, STM-x): a single platform for multiple protocols, simplified operation and limited FRUs
Ultra-low latency: industry-leading latency in nanoseconds (LAN/SAN extension to share resources across data centers for load sharing)
Compact, low-power, purpose-built for DCI (GREEN)
THANK YOU