Connecting the Clouds




Mellanox Connected Clouds

Mellanox's Ethernet and InfiniBand interconnects enable and enhance world-leading cloud infrastructures around the globe. Using Mellanox's fast server and storage interconnect solutions, these cloud vendors have maximized their cloud efficiency and reduced their cost-per-application. InfiniBand is the key ingredient for building the most scalable and cost-effective cloud environments and achieving the highest return on investment. Microsoft Azure's InfiniBand-based cloud, listed among the world's most powerful systems (TOP500), demonstrated 33% lower application cost compared to other clouds on the same list. Mellanox's Ethernet and RoCE (RDMA over Converged Ethernet) solutions deliver world-leading Ethernet-based interconnect density for compute and storage. Mellanox's Virtual Protocol Interconnect (VPI) technology incorporates both InfiniBand and Ethernet into the same solution, giving cloud providers interconnect flexibility. We encourage you to try Mellanox VPI-based cloud infrastructures and see for yourself how much faster your applications can run.

Higher Performance
- 56Gb/s per port with RDMA
- 2us VM-to-VM connectivity
- 3.5x faster VM migration
- 6x faster storage access

Cost-Effective Storage
- Higher storage density with RDMA
- Utilization of existing disk bays

Higher Infrastructure Efficiency
- Support for more VMs per server
- Hypervisor CPU offload
- Unlimited scalability
- I/O consolidation (one wire)
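The RDMA capability behind these numbers lets the adapter move data directly between registered application buffers, taking the hypervisor CPU out of the data path. The following is a minimal sketch of the setup an RDMA application performs with the standard libibverbs API; the device choice, buffer size and error handling are simplified for illustration and are not taken from this brochure.

/* Minimal RDMA setup sketch (assumes an RDMA-capable adapter and the
   libibverbs development package; compile with: gcc rdma_setup.c -libverbs).
   Registering memory gives the NIC direct, zero-copy access to the buffer,
   which is what removes the hypervisor CPU from the data path. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);        /* protection domain */

    /* Register a 4KB buffer for local and remote RDMA access. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A real application would next create a completion queue and a
       queue pair, exchange the rkey and buffer address with its peer,
       and post RDMA read/write work requests. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}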

Overview
Atlantic.Net is a global cloud hosting provider that offers high-performance support for customers through its platform. By leveraging Mellanox's InfiniBand solutions, Atlantic.Net can now offer customers more robust cloud hosting services through a reliable, adaptable infrastructure, all at a lower cost than traditional interconnect solutions.

Why Atlantic.Net Chose Mellanox

Price and Cost Advantage: Mellanox's interconnect technologies avoid expensive hardware, overhead costs while scaling, and administrative costs, reducing costs by 32% per application.

Lower Latency and Faster Storage Access: By utilizing the iSCSI RDMA Protocol (iSER) implemented in KVM servers over a single converged InfiniBand adapter, Atlantic.Net gains lower latency with less complexity, resulting in lower costs to the user (a sketch of a typical iSER initiator setup follows this section).

Consolidate I/O Transparently: LAN and SAN connectivity for VMs on KVM is tightly integrated with Atlantic.Net's management environment, allowing Atlantic.Net to transparently consolidate LAN, SAN, live-migration and other traffic.

[Chart: Cost reduction per application. Atlantic.Net lowered cost per application by 32% versus Ethernet-based competitors.]

The Bottom Line
By deploying Mellanox's InfiniBand solution, Atlantic.Net can support high-volume, high-performance requirements on demand and offer a service that scales as customers' needs change and grow. Having built a high-performance, reliable and redundant storage infrastructure using off-the-shelf commodity hardware, Atlantic.Net avoided purchasing expensive Fibre Channel storage arrays, saving significant capital expense per storage system.
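For illustration, here is a minimal sketch of how a Linux KVM host's open-iscsi initiator is typically pointed at an iSER target. The portal address and target IQN below are hypothetical placeholders; the brochure does not describe Atlantic.Net's exact configuration.

# Discover targets exposed by the storage portal (hypothetical address)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Define an initiator interface that uses the iSER transport instead of TCP
iscsiadm -m iface -I iser0 -o new
iscsiadm -m iface -I iser0 -o update -n iface.transport_name -v iser

# Log in to the (hypothetical) target over iSER
iscsiadm -m node -T iqn.2013-01.net.example:storage1 -p 192.0.2.10 -I iser0 --login

Once logged in, the target's LUNs appear as ordinary block devices, so the RDMA transport is transparent to the VMs consuming the storage.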

Overview
ProfitBricks is a worldwide cloud computing company, providing computing resources and infrastructure to customers. Faced with a choice of interconnect technologies, it chose Mellanox's InfiniBand solutions out of a desire for higher internal network speeds and throughput. By combining InfiniBand with its own in-house software, ProfitBricks offers a premier cloud service with unprecedented performance capabilities.

Why ProfitBricks Chose Mellanox

3x Increase in VMs per Physical Server: Using dual QDR 40Gb/s InfiniBand for its internal back-end network connections, ProfitBricks servers can support over 60 VMs per physical server (at a 0.5 Gb/s per-VM rate), a 3x increase over a 10GbE-based VM deployment (20 VM maximum). A back-of-the-envelope check of this arithmetic follows this section.

[Chart: VMs per physical server. InfiniBand dual QDR 40Gb/s: 60 VMs per server; 10GbE: 20 VM maximum; a 3x increase in VM deployment.]

Consolidation of Network and Storage I/O: LAN and storage connectivity for VMs on KVM and the ProfitBricks management tool environment is already integrated with Mellanox's multiple vNIC and vHBA interfaces.

Accelerated Storage Access: The SCSI RDMA Protocol (SRP), implemented in a Red Hat KVM server over a single converged InfiniBand adapter, provides lower latency and less complexity than FC/iSCSI.

The Bottom Line
By deploying Mellanox's end-to-end InfiniBand solution, ProfitBricks can support high-volume and high-performance requirements on demand. Its customers also benefit from a low pricing structure to complement that performance.
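Dividing link bandwidth by the per-VM bandwidth budget reproduces the 10GbE ceiling exactly; the dual-QDR figure of "over 60 VMs" sits well below the raw theoretical ceiling, presumably leaving headroom for storage, migration and management traffic (that interpretation is ours, not stated in the brochure). A minimal sketch of the arithmetic:

/* Back-of-the-envelope check of the VM-density figures above. */
#include <stdio.h>

int main(void)
{
    const double per_vm_gbps = 0.5;      /* bandwidth budget per VM */
    const double ten_gbe     = 10.0;     /* single 10GbE link */
    const double dual_qdr    = 2 * 40.0; /* two QDR 40Gb/s ports */

    printf("10GbE:    %.0f VMs maximum\n", ten_gbe / per_vm_gbps);      /* 20 */
    printf("dual QDR: %.0f VMs theoretical ceiling\n",
           dual_qdr / per_vm_gbps);                                     /* 160 */
    /* The brochure's "over 60 VMs per server" is well under the raw
       ceiling, consistent with reserving bandwidth for storage,
       live migration and management traffic. */
    return 0;
}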

Overview
Sakura Internet is a Japanese Infrastructure as a Service (IaaS) cloud provider that offers computing resources to enterprise customers and IT personnel. Sakura hosts a large number of customers who use its cloud services for social gaming; many also use Sakura as a Platform as a Service (PaaS) for their game development platforms. Sakura Internet chose Mellanox for the flexibility and scalability needed to support a growing, highly intensive gaming compute resource at much higher access speeds.

Why Sakura Internet Chose Mellanox

Best cost-to-performance ratio: A single 56Gb/s adapter with hardware partitioning/isolation provides much higher performance than several 10Gb/s Ethernet cards while reducing total cost of ownership.

Scalability: Social game service providers constantly increase or decrease their computing resources, sometimes on an hourly basis. Sakura needed a flat network that could simplify cloud management at scale while guaranteeing additional compute resources and I/O.

The Bottom Line
Sakura Internet's customers' content needs are growing and becoming more complicated. In addition, the devices customers use to access Sakura Internet's data center have shifted to smartphones and tablets, which use the latest communication technologies such as LTE, fueling further access and content demands. By deploying Mellanox's end-to-end InfiniBand solution, Sakura Internet built a scalable interconnect that provides the necessary throughput and communication among its servers and storage.

www.mellanox.com
Mellanox Technologies, Inc., 350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085, USA. Tel: +1 (408) 970-3400

Copyright 2013 Mellanox Technologies. All rights reserved. Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, MetroX, MLNX-OS, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 4133BR Rev 0.1