Solving I/O Bottlenecks to Enable Superior Cloud Efficiency



Overview

We already have 8 or even 16 cores on a single CPU chip, hardware-based CPU virtualization, servers with hundreds of gigabytes of memory, NUMA architectures with abundant memory bandwidth (hundreds of GB/s of memory traffic on a standard server), and disks that are now much faster thanks to SSD technology. It seems we should be able to consolidate our applications onto far fewer physical servers. So what are we missing? With all those new capabilities, why do users still experience slow application response times? Why isn't application performance predictable? And can we guarantee real isolation between virtual machines (VMs)?

The answer is simple: the current bottleneck is in the I/O and the network. If we run 10 VMs on a server, they generate 10 times or more I/O traffic. A virtualized infrastructure adds even more traffic over the network for VM migration, access to remote shared storage, high availability, and so on. When multiple VMs share the same network adapter, or when different traffic types (such as VM networking, storage, migration, and fault tolerance) run over the same wire, one can easily interfere with another. An important message can wait behind a far less critical burst of data, or one user can run a bandwidth-consuming benchmark that denies service to other users.

To eliminate the I/O problem we need to address the following:
1. Provide faster network and storage I/O with lower CPU/hypervisor overhead
2. Deliver flatter, lower-latency interconnects with more node-to-node bandwidth (and less blocking)
3. Guarantee isolation between conflicting traffic types and virtual machines
4. Optimize power, cost, and operational efficiency at scale

A critical element of the solution is to integrate I/O virtualization and provisioning with the overall cloud management and hypervisor framework, so that it becomes seamless to the end user. Mellanox products and solutions are uniquely designed to address these virtualized infrastructure challenges, delivering best-in-class, highest-performance server and storage connectivity to demanding markets and applications. They provide true hardware-based I/O isolation and network convergence with unmatched scalability and efficiency, and they are designed to simplify deployment and maintenance through automated monitoring and provisioning and seamless integration with the major cloud frameworks. The following document covers the Mellanox I/O virtualization solution and its benefits.

Mellanox I/O Virtualization Features and Benefits

Highest I/O Performance

Mellanox provides the ConnectX family of I/O adapters. These are the industry's fastest, supporting dual-port 10/40 Gigabit Ethernet and/or 40/56Gb/s InfiniBand. Using Mellanox ConnectX, we can drive far more bandwidth out of each node, while offload, acceleration, and RDMA features greatly reduce CPU overhead, leading to better performance and higher efficiency.

Figure 1. VM network performance using Mellanox ConnectX vs. alternative

Figure 1 demonstrates how, using a ConnectX adapter with a 40GbE interface, a user can deliver much faster I/O traffic than with multiple 10GbE ports from competitors.

Some demanding applications, such as databases, low-latency messaging, and data-intensive workloads, may require bare-metal performance and need to bypass the hypervisor completely when issuing I/O. Mellanox ConnectX supports multiple physical functions on the same adapter and SR-IOV (Single Root I/O Virtualization), allowing direct mapping of VMs to I/O adapter resources, bare-metal performance, and zero hypervisor CPU overhead. Note that some hypervisors may block live VM migration when direct mapping is used (cold migration is still supported), which is usually an acceptable tradeoff for performance-demanding applications. A minimal configuration sketch follows after Figure 2.

Figure 2. Message latency with SR-IOV enabled compared to a traditional NIC solution. SR-IOV delivers the same performance across virtualized and non-virtualized infrastructure
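As a rough illustration of the SR-IOV mechanism described above, the following Python sketch enables a set of virtual functions on a NIC port through the standard Linux sysfs interface; the interface name and VF count are assumptions for this example, and the resulting VFs would then be handed to guests by the hypervisor (for example, as PCI passthrough devices).

```python
#!/usr/bin/env python3
"""Minimal sketch: enable SR-IOV virtual functions (VFs) on a NIC port via the
standard Linux sysfs interface so they can later be mapped directly into VMs.
Run as root; the interface name and VF count below are assumptions."""

from pathlib import Path

IFACE = "ens1f0"     # hypothetical ConnectX port name; adjust to your host
NUM_VFS = 8          # number of virtual functions to expose

sriov = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")
total = Path(f"/sys/class/net/{IFACE}/device/sriov_totalvfs")

max_vfs = int(total.read_text())
if NUM_VFS > max_vfs:
    raise SystemExit(f"{IFACE} supports at most {max_vfs} VFs")

# The kernel requires writing 0 before changing an already non-zero VF count.
if int(sriov.read_text()) != 0:
    sriov.write_text("0")
sriov.write_text(str(NUM_VFS))

print(f"Enabled {NUM_VFS} VFs on {IFACE}; they appear as additional PCI "
      "functions that the hypervisor can pass through to guests.")
```

Once the VFs exist, the hypervisor assigns one to each performance-critical VM, so guest I/O bypasses the software virtual switch entirely.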

Hardware-Based I/O Isolation

Providing very fast I/O access can also be dangerous. VMs can generate large amounts of traffic, consuming the entire network, creating upstream or resource congestion, and denying service to other, more mission-critical applications. In addition, the hypervisor must have a guaranteed share of bandwidth for its own management, storage, and VM migration traffic. If a certain amount of I/O capacity cannot be guaranteed for the hypervisor's use, the system can malfunction or even fail. That is why many users don't take any chances and simply install multiple adapters for different traffic classes; however, installing multiple adapters drives much higher cost (more adapters, switches, cables, etc.) and complexity.

Mellanox ConnectX adapters and Mellanox switches provide a high degree of traffic isolation in hardware, allowing true fabric convergence without compromising service quality and without consuming additional CPU cycles for I/O processing. Mellanox solutions provide end-to-end traffic and congestion isolation for fabric partitions, and granular control of allocated fabric resources.

Figure 3. Mellanox ConnectX providing hardware-enforced I/O virtualization, isolation, and Quality of Service (QoS)

Every ConnectX adapter can provide thousands of I/O channels (queues) and more than a hundred virtual PCI (SR-IOV) devices, which can be assigned dynamically to form virtual NICs and virtual storage HBAs. The channels and virtualized I/O are controlled by an advanced multi-stage scheduler that manages bandwidth and priority per virtual NIC/HBA or per group of virtual I/O adapters, ensuring that traffic streams are isolated and that bandwidth is allocated and prioritized according to application and business needs. A configuration sketch follows below.
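To make the isolation model concrete, here is a hedged Python sketch of one way an administrator could express per-VM bandwidth caps and VLAN separation using the generic iproute2 SR-IOV controls (rather than any Mellanox-specific tool); the device name, VF indices, VLAN IDs, and rates are all assumptions for this example.

```python
#!/usr/bin/env python3
"""Minimal sketch: enforce per-VM bandwidth and VLAN isolation at the adapter
level by configuring SR-IOV virtual functions with iproute2. The device name,
VF indices, VLANs, and rates are assumptions."""

import subprocess

PF = "ens1f0"  # hypothetical physical function (adapter port)

# (vf index, vlan id, max_tx_rate in Mb/s) -- one entry per tenant VM
vf_policy = [
    (0, 100, 10000),   # tenant A: VLAN 100, capped at 10 Gb/s
    (1, 200, 2000),    # tenant B: VLAN 200, capped at 2 Gb/s
]

for vf, vlan, rate in vf_policy:
    # Tag all traffic from this VF with the tenant VLAN (enforced in hardware).
    subprocess.run(["ip", "link", "set", "dev", PF, "vf", str(vf),
                    "vlan", str(vlan)], check=True)
    # Cap the VF's transmit rate so one VM cannot starve the others.
    subprocess.run(["ip", "link", "set", "dev", PF, "vf", str(vf),
                    "max_tx_rate", str(rate)], check=True)

# Show the resulting per-VF configuration.
subprocess.run(["ip", "link", "show", "dev", PF], check=True)
```

Because these limits are applied by the adapter rather than by the hypervisor's virtual switch, they hold even when a guest bypasses the hypervisor for I/O.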

Accelerating Storage Access

In addition to providing better network performance, ConnectX's RDMA capabilities can be used to accelerate hypervisor traffic such as storage access, VM migration, and data and VM replication. RDMA pushes the task of moving data from node to node into the ConnectX hardware, yielding much faster transfers, lower latency and access times, and lower CPU overhead.

Figure 4. Example using RDMA-based storage access vs. traditional I/O

Figure 4 demonstrates the storage throughput achieved when using RDMA-based iSCSI (iSER) compared to traditional TCP/IP-based iSCSI, and shows how RDMA can provide 10X faster performance (a host-side configuration sketch follows Figure 5).

When deploying a rack of servers, each hosting a couple of dozen virtual machines, the result is on the order of 1,000 VMs generating independent I/O transactions to storage. Because of the high density and random I/O patterns involved, most traditional storage systems cannot cope with the load and deliver extremely slow performance and response times, leading to underutilized CPU resources and degraded application performance.

Mellanox's award-winning Storage Accelerator (VSA), a software or hardware appliance installed on a standard server, is designed to deliver parallel storage access across multiple 40/56Gb/s Ethernet or InfiniBand interconnects and eliminate the storage bottleneck. It provides the following key features:

- Uses internal fast SSD or Flash for boot, temporary data, or storage cache
- Significantly increases VM performance and the VMs-per-server consolidation ratio
- Delivers easy-to-use FC I/O port virtualization, enabling fabric convergence
- Provides simple management and monitoring of all storage traffic

Figure 5. Mellanox VSA Based Storage Acceleration Solution
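As a hedged, host-side sketch of the RDMA storage path compared in Figure 4, the following Python script logs a host into an iSCSI target over iSER using open-iscsi's iscsiadm; the portal address and target IQN are placeholders, and an RDMA-capable NIC plus an iSER-enabled target are assumed.

```python
#!/usr/bin/env python3
"""Minimal sketch: attach a host to an iSCSI target over RDMA (iSER) instead
of TCP, using open-iscsi's iscsiadm. The portal and target IQN are
placeholders; an RDMA-capable NIC and an iSER-enabled target are assumed."""

import subprocess

PORTAL = "192.168.10.20:3260"               # hypothetical storage portal
TARGET = "iqn.2012-01.com.example:vol1"     # hypothetical target IQN

def iscsiadm(*args):
    """Run an iscsiadm command and fail loudly on error."""
    subprocess.run(["iscsiadm", *args], check=True)

# Discover targets behind the portal using the iSER transport interface.
iscsiadm("-m", "discovery", "-t", "sendtargets", "-p", PORTAL, "-I", "iser")

# Log in over RDMA; block I/O to the resulting /dev/sdX device now bypasses
# the host TCP/IP stack and is moved by the adapter hardware.
iscsiadm("-m", "node", "-T", TARGET, "-p", PORTAL, "-I", "iser", "--login")
```

The only change from a conventional iSCSI login is the transport interface selection, which is what moves the data path from the kernel TCP/IP stack onto RDMA.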

Automated Fabric and I/O Provisioning

The fabric is a key element in any cloud solution. Virtual machines belonging to multiple tenants can share the same fabric, but each tenant may need its own isolated domain and private networks, and each VM may require a certain amount of allocated network resources (bandwidth, priority, VLANs, etc.). Since VMs are deployed dynamically and can migrate from one server to another, it is essential to have a dynamic network virtualization and resource management solution that integrates with the overall cloud management framework.

Mellanox Unified Fabric Manager (UFM) provides service-oriented network provisioning, virtualization, and monitoring. It uses the industry-standard Quantum REST API (part of OpenStack) for network and I/O provisioning in virtualized environments, and it is integrated with a variety of cloud frameworks. This allows seamless operation of the cloud while ensuring network isolation, security, and SLA management; a minimal example of the API follows Figure 6.

Figure 6. Cloud solution with seamless integration of server, storage, and network provisioning
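As a minimal sketch of the provisioning interface mentioned above, the following Python snippet creates an isolated tenant network through the OpenStack Quantum (now Neutron) v2.0 REST API; the controller endpoint and authentication token are placeholders for this example.

```python
#!/usr/bin/env python3
"""Minimal sketch: provision an isolated tenant network through the
Quantum/Neutron v2.0 REST API, the interface the paper says UFM integrates
with. The endpoint URL and token below are placeholders."""

import json
import urllib.request

QUANTUM_URL = "http://cloud-controller:9696/v2.0"   # hypothetical endpoint
TOKEN = "<keystone-auth-token>"                     # obtained from Keystone

def create_network(name):
    """POST /v2.0/networks and return the created network record."""
    body = json.dumps({"network": {"name": name, "admin_state_up": True}})
    req = urllib.request.Request(
        f"{QUANTUM_URL}/networks",
        data=body.encode(),
        headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["network"]

net = create_network("tenant-a-private")
print("Created network", net["id"])
# A fabric plugin (such as UFM's) can now map this logical network onto an
# isolated partition or VLAN in the physical fabric.
```

Because the cloud framework drives the same API, the fabric configuration follows VMs automatically as they are created or migrated.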

Summary

In today's virtualized data center, I/O is the key bottleneck leading to degraded application performance and poor service levels. Furthermore, infrastructure consolidation and the cloud model mandate that I/O and network resources be partitioned, secured, and automated. Mellanox products and solutions enable a high-performance and efficient cloud infrastructure. With Mellanox, users do not need to compromise on performance, application service levels, security, or usability in virtualized environments, and Mellanox provides the most cost-effective cloud infrastructure. Our solutions deliver the following features:

- Fastest I/O adapters, with 10, 40, and 56Gb/s per port and sub-1us latency
- Low-latency, high-throughput VM-to-VM performance with full OS bypass and RDMA
- Accelerated storage access with up to 6GB/s throughput per VM
- Hardware-based I/O virtualization and network isolation
- I/O consolidation of LAN, IPC, and storage over a single wire
- Cost-effective, high-density switches and fabric architecture
- End-to-end I/O and network provisioning, with native integration into key cloud frameworks

Mellanox solutions address the critical elements in the cloud, bringing performance, isolation, operational efficiency, and simplicity to the new enterprise data center, while dramatically reducing total solution cost. Please visit us at www.mellanox.com to learn more about Mellanox products and solutions.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403  www.mellanox.com

Boston Server & Storage Solutions, Kapellenstrasse 11, 85622 Feldkirchen, Germany
+49 89 9090199-3  sales@boston-it.de  www.boston-itsolutions.de

Copyright 2012. Mellanox Technologies. All rights reserved. Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. FabricIT, MLNX-OS, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 3922WP Rev 1.0