WHITE PAPER May 2014

Deploying Ceph with High Performance Networks, Architectures and Benchmarks for Block Storage Solutions

Contents
Executive Summary...2
Background...2
Network Configuration...3
Test Environment...3
The Ceph Cluster Installation...4
500TB Raw storage configuration...6
8.5PB Raw storage configuration...7
Conclusion...7
Appendix A...8
Appendix B...9
Appendix C...12
Ceph Server Hardware Configuration...12
Ceph Nodes Top of Rack (ToR) Switching Solution...12
Ceph Client Switching Solution...12
Executive Summary
As data continues to grow exponentially, storing today's data volumes efficiently is a challenge. Many traditional storage solutions neither scale out nor make it feasible, from a CapEx and OpEx perspective, to deploy petabyte- or exabyte-scale data stores. A novel approach is required to manage present-day data volumes and provide users with reasonable access times at a manageable cost.

This paper summarizes the installation and performance benchmarks of a Ceph storage solution. Ceph is a massively scalable, open source, software-defined storage solution, which uniquely provides object, block and file system services with a single, unified Ceph Storage Cluster. The testing emphasizes the careful network architecture design necessary to handle users' data throughput and transaction requirements. Benchmarks show that a single user can generate enough read throughput to saturate a 10Gbps Ethernet network, while write performance is largely determined by the capabilities of the cluster's media (hard drives and solid state drives). For even a modestly sized Ceph deployment, a 40Gbps Ethernet cluster ("backend") network is imperative to maintain a well-performing cluster.

The reference architecture and solution cost are based on an online reseller's single-unit pricing on the date this paper is published, and should be adjusted to the actual deployment, timing and customer equipment sourcing preferences.

Background
Software-defined storage solutions are an emerging practice for storing and archiving large volumes of data. Data at contemporary web, cloud and enterprise organizations grows exponentially, and growth of terabytes per day is common practice. Legacy solutions do not suffice to meet these storage needs at a reasonable cost, driving customers to find more efficient alternatives, such as scale-out, software-defined storage. One of the leading scale-out, software-defined storage solutions is Ceph.
Ceph uniquely delivers object, block, and file storage in one unified system. Ceph is highly reliable, easy to manage, and open source. The power of Ceph can transform your company's IT infrastructure and your ability to manage vast amounts of data. Ceph delivers extraordinary scalability: thousands of clients accessing petabytes or even exabytes of data. A Ceph Node leverages commodity hardware and intelligent daemons, and a Ceph Storage Cluster accommodates large numbers of nodes, which communicate with each other to replicate and redistribute data dynamically. A Ceph Monitor can also be placed in a cluster of Ceph Monitors to oversee the Ceph nodes in the Ceph Storage Cluster, thereby ensuring high availability.

Figure 1. Ceph Architecture

Ceph Storage Clusters are dynamic, like a living organism. Whereas many storage appliances do not fully utilize the CPU and RAM of a typical commodity server, Ceph does. From heartbeats, to peering, to rebalancing the cluster or recovering from faults, Ceph offloads work from clients (and from a centralized gateway, which does not exist in the Ceph architecture) and uses the distributed computing power of the Ceph Object Storage Daemons (OSDs) to perform the work.
Network Configuration
For a highly scalable, fault tolerant storage cluster, the network architecture is as important as the nodes running the Ceph Monitors and the Ceph OSD Daemons. The major requirements for Ceph Storage Clusters are high scalability and high availability, so networks must have the capacity to handle the expected number of clients and per-client bandwidth. The networks must also handle Ceph OSD heartbeat, data replication, cluster rebalancing and recovery traffic. In normal operation, a single write to the primary Ceph OSD Daemon results in additional writes to secondary daemons, based on the configured replication factor, so cluster (back-side) traffic significantly exceeds public (front-side) traffic under normal operating conditions.

The public network enables Ceph Clients to read data from and write data to Ceph OSD Daemons, as well as to send heartbeats to OSDs; the cluster network enables each Ceph OSD Daemon to check the heartbeat of other Ceph OSD Daemons, send status reports to monitors, replicate objects, rebalance the cluster, and backfill and recover when system components fail. Considering the additional loads and tasks on the cluster network, it is reasonable to suggest that the fabric interconnecting the Ceph OSD Daemons should have at least 2x-4x the capacity of the fabric on the public network.

Figure 2. Ceph Network Architecture

Test Environment
We created a test environment to measure the capabilities of a Ceph Block Storage solution over 10Gbps and 40Gbps Ethernet. The testing goal is to maximize data ingestion into, and extraction from, a Ceph Block Storage solution. The environment contains the following equipment:

Ceph Installation:
- Ceph (Emperor)
- Ceph-deploy
- Each node configured with 5 OSDs (HDDs), 1 journal (PCIe SSD)
- 3 Monitors

Hardware:
(5) Ceph OSD nodes
- CPU: 2x Intel E
- DRAM: 64GB
- Media: 1x 256GB SSD boot drive; 5x 600GB, 10K RPM hard drives; 1x 800GB PCIe Gen2 x4 SSD card
- Network: Mellanox ConnectX-3 dual-port 10Gb/40Gb Ethernet NIC, MCX314A-BCBT

Figure 3. Ceph Test Environment
(1) Admin node
- CPU: 1x AMD Opteron 4180
- DRAM: 32GB
- Media: 2x 500GB HDD
- Network: Mellanox ConnectX-3 dual-port 10Gb/40Gb Ethernet NIC, MCX314A-BCBT

(2) Client nodes
- CPU: 2x Intel E
- DRAM: 64GB
- Media: 1x 256GB SSD boot drive; 2x 1000GB, 7.2K RPM hard drives
- Network: Mellanox ConnectX-3 dual-port 10Gb/40Gb Ethernet NIC, MCX314A-BCBT

Switching solution:
- 40GbE: Mellanox MSX1036B, SwitchX-2 based 40GbE, 1U, 36 QSFP+ ports
- 10GbE: Mellanox MSX1016X, SwitchX-2 based 10GbE, 1U, 64 SFP+ ports

Cabling:
- 40GbE: MC , Mellanox passive copper cable, ETH 40GbE, 40Gb/s, QSFP, 3m
- 10GbE: MC , Mellanox passive copper cable, ETH 10GbE, 10Gb/s, SFP+, 3m

The Ceph Cluster Installation
The installation followed the instructions in the Ceph.com documentation. We defined public and cluster network settings in the ceph.conf file, a 3-monitor quorum, and the replication factor set to the default of 2. The ceph.conf file used for testing can be found in Appendix B.

Network performance was checked after the installation using the iperf tool. The following commands were used to measure network bandwidth:

Server side: iperf -s
Client side: iperf -c <server host IP> -P16 -l 64k -i 3

For the 10GbE network, the achieved bandwidth ranged from 9.48Gb/s to 9.78Gb/s. For the 40GbE network, the achieved bandwidth ranged from 36.82Gb/s to 38.43Gb/s.

The benchmark testing was conducted using the fio tool with the following settings:

fio --directory=/mnt/cephblockstorage --direct=1 --rw=$action --bs=$blocksize --size=30g --numjobs=128 --runtime=60 --group_reporting --name=testfile --output=$outputfile

- $action = read, write, randread, randwrite
- $blocksize = 4k, 128k, 8m

The client file system format is ext4. The setup and mounting commands on the client are:

#> rbd -c /etc/ceph/ceph.conf -p benchmark create benchmrk --size
#> rbd map benchmrk --pool benchmark --secret /etc/ceph/client.admin
#> mkfs.ext4 /dev/rbd1
#> mkdir /mnt/cephblockstorage
#> mount /dev/rbd1 /mnt/cephblockstorage

The testing was conducted with the cluster network set to its maximum bandwidth of 40Gb/s while varying the public network speed. The graph below shows the performance results achieved for sequential read throughput; complete test results for random read and write as well as sequential read and write performance can be found in Appendix A.

Figure 4. Ceph Single Client, Sequential Throughput and Transaction Capabilities

The need for a high performance public network is obvious to maintain a well-performing environment. We suggest a tiered network solution where the Ceph nodes are connected to both the cluster and the public network over 40GbE, using a Mellanox SX1036B to connect the Ceph nodes and a Mellanox SX1024 to connect the clients to the Ceph nodes, as described in Figure 4.

The need for 40GbE in the cluster network comes from the data write replication workload: with servers deployed with as few as 15 HDDs, the write bandwidth exceeds a 10GbE network's capability. In our testing, with only 5 drives per machine, we experienced over 6Gbps of peak network usage. For systems incorporating flash-based solid state drives, 10GbE is clearly inadequate and a minimum of 40Gb/s backend bandwidth is required. The proposed architecture can scale from 1TB to over 8PB, using 4RU servers with 24 media devices per server.

Since the clients are connected with a 10GbE solution, concurrent client traffic will saturate the network if the Ceph nodes use the same 10GbE solution. To eliminate this bottleneck, we suggest connecting the Ceph nodes to a semi-private network operating at 40GbE, connected to the public network in which clients use 10GbE. The images below show two examples of this deployment, the first for 500TB of raw storage and the second for 8PB of raw storage.
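The fio benchmark described in the Test Environment section sweeps four access patterns and three block sizes. A minimal driver script for that sweep might look like the sketch below; it assumes fio is installed and the RBD image is mounted at /mnt/cephblockstorage as shown earlier, and the per-run log-file naming is our own convention, not from the original test harness:

```shell
#!/bin/sh
# Emit the fio command lines for the benchmark sweep described in the text:
# four I/O patterns x three block sizes = 12 runs in total.
fio_sweep() {
  for action in read write randread randwrite; do
    for blocksize in 4k 128k 8m; do
      echo "fio --directory=/mnt/cephblockstorage --direct=1 --rw=$action --bs=$blocksize --size=30g --numjobs=128 --runtime=60 --group_reporting --name=testfile --output=fio_${action}_${blocksize}.log"
    done
  done
}

fio_sweep   # prints the 12 commands; pipe to `sh` to actually execute them
```

Printing the commands first (rather than running fio directly) makes it easy to review the full matrix before committing to a multi-hour benchmark run.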
Each rack contains up to 10 servers, 4RU each, with 72TB of raw storage using 24x 3TB HDDs and 6 SSD drives used as journals. The servers are connected using a dual-port 40GbE card; port 1 is connected to the semi-private network while port 2 is connected to the cluster network. In the 8PB deployment we use the increased throughput capability of Mellanox switches to drive 56Gbps of Ethernet traffic between the Top of Rack switch and the aggregation layer. The increased throughput reduces the number of cables needed to create a non-blocking fat-tree architecture.
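The capacity and bandwidth figures above can be sanity-checked with simple arithmetic. The sketch below uses the node counts, drive counts, and replication factor from the text; the ~100MB/s sequential write rate per HDD is our assumption, not a figure from the paper:

```shell
#!/bin/sh
# Per-server and per-rack raw capacity, from the text:
# 24x 3TB HDDs per server, up to 10 servers per rack.
tb_per_server=$(( 24 * 3 ))              # 72TB per server, as stated
tb_per_rack=$(( 10 * tb_per_server ))    # 720TB per rack

# Why ~15 HDDs saturate a 10GbE cluster link: aggregate sequential write
# bandwidth, assuming ~100MB/s per HDD (our assumption).
hdds=15; mbps_per_hdd=100
write_gbps=$(( hdds * mbps_per_hdd * 8 / 1000 ))   # 12Gb/s > 10GbE line rate

# Replica traffic: with replication factor R, each primary write generates
# (R - 1) secondary writes on the cluster network, on top of the above.
replication_factor=2
replica_multiplier=$(( replication_factor - 1 ))

echo "per server: ${tb_per_server}TB, per rack: ${tb_per_rack}TB"
echo "15-HDD aggregate write: ${write_gbps}Gb/s (x${replica_multiplier} again as replica traffic)"
```

Even with the conservative per-drive assumption, a 15-drive server already exceeds a 10Gb/s link before replica traffic is counted, which is the argument for 40GbE on the cluster network.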
Figure 5. 500TB Ceph Storage Design

500TB Raw storage configuration
Includes 7 Ceph nodes, each equipped with 24 HDDs of 3TB raw storage each, 6 SSD drives of 480GB each, dual CPUs with a total of 20 cores per node, 64GB of DRAM per node, and a Mellanox ConnectX-3 based dual-port NIC operating at 40GbE. The switching infrastructure contains the following:

Public Network:
- 1x MSX1036B, based on Mellanox SwitchX-2, 36 ports, 40GbE QSFP
- 3x MSX1024B, based on Mellanox SwitchX-2, 48 ports 10GbE, 12 ports 40GbE

Cluster Network:
- 1x MSX1012B, based on Mellanox SwitchX-2, 12 ports, 40GbE QSFP

This solution can connect over 150 client nodes to the storage nodes at average client line-rate speeds of over 1GB/s. The storage cluster can easily scale to over 800TB without the need to add any additional switching hardware.

The cost per terabyte of this solution is US$410, using single-unit purchases from an online retailer on the date of this paper's publication. The cost includes all hardware components for the Ceph cluster: complete servers (CPU, DRAM, HDDs, SSDs, NICs) and cluster and public network equipment, including cables. The cost does not include the client nodes, software licensing, or professional services fees. The complete hardware list used in this configuration can be found in Appendix C.

Figure 6. 8.5PB Ceph Storage Design
8.5PB Raw storage configuration
This 8.5PB configuration includes 120 Ceph nodes, each equipped with 24 HDDs of 3TB raw storage each, 6 SSD drives of 480GB each, dual CPUs with a total of 20 cores per node, 64GB of DRAM per node, and a Mellanox ConnectX-3 based dual-port NIC operating at 40GbE. The switching infrastructure consists of the following:

Public Network:
- 11x MSX1036B, based on Mellanox SwitchX-2, 36 ports, 40GbE QSFP
- 10x MSX1024B, based on Mellanox SwitchX-2, 48 ports 10GbE, 12 ports 40GbE

Cluster Network:
- 9x MSX1012B, based on Mellanox SwitchX-2, 12 ports, 40GbE QSFP

This solution can connect over 500 client nodes to the storage solution at client line-rate speeds of over 1GB/s. The cost per terabyte of this solution is US$350, using single-unit purchases from an online retailer on the date of this paper's publication. The cost includes all hardware components for the Ceph cluster: complete servers (CPU, DRAM, HDDs, SSDs, NICs) and cluster and public network equipment, including cables. The cost does not include the client nodes, software licensing, or professional services fees. The complete hardware list used in this configuration can be found in Appendix C.

Conclusion
The scalability of Ceph is demonstrated by the 8.5PB configuration above. Ceph software-defined storage deployed with Mellanox's 10G/40Gb Ethernet solutions enables smooth scaling with sustained performance. A fast public network tightly coupling Ceph nodes equipped with multiple OSDs per node allows clients to make effective use of higher throughput and IOPS. The 40GbE cluster network offloads replication traffic from the public network and provides faster data consistency and reliability.
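As a closing sanity check, the quoted per-terabyte prices imply total hardware budgets for the two reference configurations. The node counts, drive counts, and per-terabyte prices below are from the text; the raw capacities and total costs are derived here, assuming the price is quoted per raw terabyte:

```shell
#!/bin/sh
# 500TB design: 7 nodes x 24 HDDs x 3TB, at US$410 per raw TB.
raw_500=$(( 7 * 24 * 3 ))        # 504TB of raw storage
cost_500=$(( raw_500 * 410 ))    # implied hardware budget in US$

# 8.5PB design: 120 nodes x 24 HDDs x 3TB, at US$350 per raw TB.
raw_8p5=$(( 120 * 24 * 3 ))      # 8640TB, roughly matching the 8.5PB label
cost_8p5=$(( raw_8p5 * 350 ))

echo "500TB design: ${raw_500}TB raw, ~US\$${cost_500}"
echo "8.5PB design: ${raw_8p5}TB raw, ~US\$${cost_8p5}"
```

The drop from US$410 to US$350 per terabyte between the two designs reflects the better amortization of switching hardware at the larger scale.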
Appendix C

Ceph Server Hardware Configuration
- Chassis: 4RU
- CPU: Dual Intel E5-2660v2
- Total Server Memory: 64GByte
- Hard Drives: 24x 3TB SAS, 7.2K RPM
- Solid State Drives: 6x 480GB SATA
- Boot Drives: 2x 120GB SSD SATA drives
- Network card: MCX314A-BCBT, Mellanox ConnectX-3 EN network interface card, 40GbE, dual-port QSFP, PCIe 3.0 x8 8GT/s

Ceph Nodes Top of Rack (ToR) Switching Solution
The ToR solution includes 36 ports, 40GbE each, with the optional 56Gbps licensing for ease of deployment. The public and cluster networks use the same building blocks and, where required, the aggregation layer also uses the same switching solution.

Switch: Mellanox SX1036B-1SFS, SwitchX-2 based 36-port QSFP 40GbE 1U Ethernet switch, 36 QSFP ports, 1 PS, standard depth, PSU side to connector side airflow, rail kit and RoHS6 (users should verify the required airflow directions)

Ceph node to ToR cabling solutions:
1. MC , Mellanox passive copper cable, VPI, up to 56Gb/s, QSFP, 1 meter long
2. MC , Mellanox passive copper cable, VPI, up to 56Gb/s, QSFP, 2 meters long
3. MC , Mellanox passive copper cable, VPI, up to 56Gb/s, QSFP, 3 meters long

ToR to aggregation/client switching cabling solutions:
1. MC , Mellanox active fiber cable, VPI, up to 56Gb/s, QSFP, 10 meters long

Ceph Client Switching Solution
The clients are connected with a 10G/40GbE switching solution using the optional 56Gbps licensing for ease of deployment.

Switch: Mellanox MSX1024B-1BFS, SwitchX-2 based 48-port SFP+ 10GbE, 12-port QSFP 40GbE, 1U Ethernet switch, 1 PS, short depth, PSU side to connector side airflow, rail kit and RoHS6 (users should verify the required airflow directions)

350 Oakmead Parkway, Suite 100, Sunnyvale, CA
Mellanox, ConnectX, SwitchX, and Virtual Protocol Interconnect (VPI) are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 3870WP Rev 1.2