Application Performance for High Performance Computing Environments


Leveraging the strengths of computationally intensive applications with high performance scale out file serving in data storage modules

OVERVIEW

Solving the world's toughest problems requires extreme data analysis. Medical researchers model and simulate genomic structures for the betterment of human health. Climatologists simulate and interpret weather patterns to better understand climate change and more accurately predict devastating storms. These simulations and analyses consume and create data at immense scale, and their computing requirements include massively parallel compute power, high bandwidth interconnects, and fast, scalable storage.

Consider the oil and gas industry as an example. The collection and examination of seismic data requires robust, scalable processing and storage to deliver analysis and reporting to a broad range of users, including geologists, geophysicists, and field personnel. New techniques in marine seismic acquisition and analysis (3D, 4C and 4D), along with evolving seismic technologies such as Time Lapse, Wide Azimuth and Full Azimuth data acquisition, are responsible for a sharp increase in the quantity of data that must be stored and processed. Key applications include HPC workloads such as seismic interpretation, reservoir characterization, prospect evaluation systems, and petrophysical analysis. Data storage systems must adapt to this dynamic landscape by offering solutions that provide density, scalability and performance, yet remain simple to manage and affordable.

Dot Hill Systems, Intel and Mellanox have partnered to offer a massively scalable, high performance infrastructure for use in a broad range of high performance computing platforms.

THE CHALLENGE

Speed

When storage must be accessed by many computing nodes, such as in large analytical environments performing interpretation and characterization of geospatial information, the storage subsystem must deliver superior performance so that the system as a whole can deliver results in a timely manner. Typical workloads involve many nodes or threads performing sequential access of very large files. While the workload of an individual node or thread is sequential in nature, the composite workload of many nodes or threads causes the underlying access pattern to become highly randomized. The storage subsystem must respond to this environment by maintaining high sequential throughput as the number of simultaneous streams increases.

Capacity

Geophysical data consumes tremendous amounts of storage capacity. Typical systems require hundreds of terabytes to petabytes of storage, much of which must remain readily accessible for long periods of time. These requirements drive the need for storage solutions that provide high capacity, high density, low cost and low overhead.

Scale

To achieve greater speed and capacity, storage systems can either scale up or scale out. Scale up architectures are ultimately limited by the monolithic design of the system. Scale out architectures, on the other hand, offer far more opportunity to grow both capacity and performance.

Namespace

The objective of any storage subsystem is to present a single global namespace from which all storage can be accessed. This greatly simplifies the management of the storage, since there is no need to manually balance and manage data among multiple storage namespaces.

THE SOLUTION

The Dot Hill Scale Out, High Performance File Serving solution consists of hardware components, software elements and infrastructure, all architected to provide balance and scalability. The components of the architecture have been sized and selected to match one another, and the solution is designed with growth in mind. Customers can begin with a modest implementation and then easily add storage modules as their needs expand.

Storage. The backbone of the solution is the AssuredSAN 4004 Storage Array. This workhorse includes many key features that allow it to deliver high performance and reliability. Adaptive caching technologies accommodate dozens of independent streams of data without degrading overall throughput, a critical feature for applications that depend upon reliable, high performance data streaming. In addition, the proven availability of AssuredSAN products virtually eliminates downtime, allowing the datacenter to operate smoothly. Individual arrays support up to 96 Large Form Factor (LFF) disk drives; using 4TB 7K RPM SAS disk drives, a single system can provide 384TB of raw capacity.

File Serving. The Intel Enterprise Edition for Lustre* software, hereafter referred to as Lustre*, provides the high performance distributed file system and namespace component of the solution. Lustre* is an increasingly popular choice in High Performance Computing environments because it enables bandwidth, performance and scaling well beyond the limits of traditional storage technology. In fact, Lustre* is the file system most widely used by the world's top 500 supercomputing sites. The Lustre* architecture consists of Object Storage Servers (OSS), Metadata Servers (MDS) and client nodes connected over a high speed network. Lustre* offers highly scalable network connectivity to the Lustre* clients, with higher per-stream performance than NFS or CIFS. The Intel Enterprise Edition for Lustre* software includes a rich set of manageability components designed to simplify many of the tasks associated with configuring, deploying, tuning, managing and maintaining Lustre*.

Networking. Mellanox is an industry leader in high speed networking components and infrastructure. To realize the benefits of a high performance storage subsystem, it must be coupled with a high speed network to the end clients. Mellanox InfiniBand solutions support the FDR 56 Gbps rate, the highest throughput and lowest latency interconnect available on the market today. In addition, Mellanox network solutions use Remote Direct Memory Access (RDMA) protocols to make more efficient use of compute resources. Using RDMA over InfiniBand, data transfer latencies can be reduced by over 90% and CPU efficiencies can be elevated up to 96%.

Architecture. The solution comes together as described in Figure 1.

Storage Modules. One or more Storage Modules form the basis of the data repository. Each module consists of a Dot Hill AssuredSAN array and two industry standard x64 servers. The servers connect to the array with common 12 Gbps SAS interconnects, and to the file serving network via high speed InfiniBand. These servers run Linux and host the Lustre* Object Storage Server (OSS) component. Each server pair is clustered for high availability. The maximum raw capacity of a single Storage Module is 1344TB (4 chassis of 56 drives each, populated with 6TB drives).

Metadata Module. One Metadata Module is required to provide the file locking and file integrity needed in the distributed environment. This module consists of a pair of clustered servers running Linux and the Lustre* Metadata Server (MDS) component. The servers connect to a dedicated AssuredSAN array via SAS interconnects, and to the file serving network via high speed InfiniBand.

Client Connectivity. The client nodes in the network host the end user applications that process the data. These nodes run the Lustre* client software and connect to the network via high speed InfiniBand.

Networking. InfiniBand switches and adapter cards complete the solution. This infrastructure provides the high bandwidth streaming required by many HPC solutions in general, and by seismic analysis and processing within the Energy Industry in particular.

Figure 1: Lustre* Solution Architecture

SOLUTION BEST PRACTICES

Each building block of the solution contributes to overall performance, and a successful configuration requires careful planning. The steps below help avoid common pitfalls and ensure success.

Storage Configuration

Redundant Cabling: The 4004 SAS controllers provide 8 SAS host ports across 2 active-active controllers in a single chassis. There are two key steps. First, ensure each OSS/MDS server is connected to the storage array with a minimum of 2 cables, so that storage resources are presented identically to each server in the OSS/MDS clustered pair. Second, ensure each 2-cable pair is balanced across the A and B controllers, so that if one controller or the path to it fails, the OSS/MDS still has an alternate path to the LUNs. Figure 2 depicts this layout.

Figure 2: Redundant cabling

Segregate Metadata and Object storage: The I/O patterns for metadata and object storage are quite different. Consider how Lustre* clients interact with the MDS and OSS. A Lustre* client must interact with the MDS before any file operations can occur. Client-to-MDS interactions consist of small transactions to open, close, create, and delete files. The MDS is the single coordination point, and a Lustre* client may not access a file on an OSS until it has coordinated via the MDS. The MDS therefore hits the storage controller with many random, small block transfers; high IOPS is the key requirement for an MDS. After gaining access to a file via the MDS, Lustre* clients interact with one or more OSSs to read and write files. Client-to-OSS interactions consist of large, 1MB transfers of file data. In contrast to the MDS, the OSS hits the storage controllers with many streams of sequential, large block transfers to read and write the customer's data; sustained throughput is the key requirement for an OSS. High IOPS and sustained throughput are competing I/O patterns, and at scale a single set of controllers could become a choke point. To ensure robust performance, use separate arrays for metadata and object storage.

o Metadata Arrays: Arrays for metadata should consist of faster rotating disks (10K/15K RPM) or SSDs, which are appropriate for applications requiring high IOPS.

o Object Storage Arrays: Arrays for object storage typically consist of higher capacity, slower-rotating disks, usually 7K RPM with capacities of 3, 4, or 6TB.

Higher capacity drives provide more storage for the large file sizes produced in HPC environments; however, there may be instances where SFF 10K 1TB drives are more appropriate in an environment requiring a balance of density and speed.

Disk groups/LUNs: In Dot Hill terminology, a physical RAID set is a disk group. One or more LUNs can be carved out of a disk group and presented to hosts. The disk groups for object storage and metadata have different requirements and therefore require different configurations.

o Metadata Targets (MDT): MDTs must rapidly service random I/O requests for small pieces of information (essentially file records). They are also the single source of file information, so they need a high level of fault tolerance. RAID 10 provides outstanding I/O rates with excellent redundancy.

o Object Storage Targets (OST): A single Lustre* file can grow to 32 PB, so OSTs need a balance between capacity utilization and redundancy. RAID 6 provides excellent redundancy and capacity utilization. An important consideration with RAID 6 is ensuring that writes are as efficient as possible. The key data point is that Lustre* uses a 1MB block size. RAID 6 stripes data across the disk group: if N is the total number of disks in a RAID 6 disk group, each stripe consists of N-2 data disks and 2 parity disks, with parity calculated from the data disks in the stripe using XOR. You want a disk group configuration that splits the 1MB Lustre* block transfer evenly across every data disk in the disk group. This is called a full stripe write, and full stripe writes are critical to performance in parity-based RAID. To ensure full stripe writes, create disk groups whose data disk count is a power of 2. For instance, a RAID 6 disk group of 10 disks (8 data and 2 parity) with a chunk size of 128KB splits the 1MB Lustre* block size evenly across the 8 data disks.

o Management Storage Target (MGT): The MGT has much smaller needs than an OST; it can be as small as 100MB. It is nonetheless an important component of the solution, so redundancy is still a requirement. Create the MGT on the same disk group used by the MDT: the MGT gains the performance and redundancy benefits of RAID 10, and the small capacity used by the MGT will not impact the MDT. An alternative configuration could dedicate a single RAID 1 disk group to the MGT.

o LUNs: Create one LUN on top of each disk group, and use the entire capacity of the disk group. The exception, noted above, is a single MGT LUN and MDT LUN combined on the MDT disk group. Give the LUNs descriptive names to ease management, for example "OSS1_OST4".

o Host access control: Direct attach SAS is typically used between the OSS/MDS servers and the storage arrays; this avoids the cost of additional switches and the complex management typical of Fibre Channel zones. Use default mapping on the Dot Hill array, which gives any initiator that connects to the array full access to all LUNs across all ports on both controllers. This ensures full multipath discovery from the OSS/MDS hosts and simplifies overall management.
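To make the full stripe write arithmetic concrete, the short shell sketch below checks a candidate RAID 6 layout against Lustre*'s 1MB transfer size. The disk count and chunk size simply mirror the example above; treat them as assumptions to adjust for your own disk groups.

#!/bin/bash
# Full-stripe-write sanity check for a RAID 6 disk group (sketch).
DISKS=10               # total disks in the RAID 6 disk group
PARITY=2               # RAID 6 always reserves two disks' worth of parity per stripe
CHUNK_KB=128           # chunk (strip) size configured on the array, in KB
LUSTRE_BLOCK_KB=1024   # Lustre* issues 1MB (1024KB) transfers

DATA_DISKS=$((DISKS - PARITY))
STRIPE_KB=$((DATA_DISKS * CHUNK_KB))
echo "Data disks: ${DATA_DISKS}, full stripe width: ${STRIPE_KB}KB"

if [ $((LUSTRE_BLOCK_KB % STRIPE_KB)) -eq 0 ]; then
    echo "1MB Lustre* transfers align with full stripes (no read-modify-write)."
else
    echo "Warning: 1MB transfers do not fill a whole stripe; expect parity read-modify-write."
fi

With the example values, 8 data disks multiplied by a 128KB chunk yields exactly 1024KB, so each 1MB Lustre* write maps to one full stripe.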

Server Configuration

Multipath: Multiple, redundant paths between the OSS/MDS hosts and the storage array keep the Lustre* file system online in the event of a node or path failure. However, an incorrect multipath configuration can be a subtle source of poor performance, so it is critical to get it right. Dot Hill arrays comply with the SCSI-3 standard for Asymmetric Logical Unit Access (ALUA). ALUA-compliant arrays provide optimal and non-optimal path information to the host during device discovery, but the operating system must be directed to use ALUA. Here are the key steps:

1. Ensure the multipath daemon is installed and set to start at boot.
   a. Linux command: chkconfig multipathd on
2. Ensure the correct entries exist in the /etc/multipath.conf file on each OSS/MDS host. Create a separate device entry for the Dot Hill array. There are 4 key attributes that should be set:
   a. prio=alua
   b. failback=immediate
   c. vendor=DotHill
   d. product=<the correct product ID>
      i. Run the Linux command multipath -v3 to obtain the exact vendor and product IDs.
3. Instruct the multipath daemon to reload the multipath.conf file, or reboot the server.
   a. Linux command: service multipathd reload
4. Verify that the multipath daemon used ALUA to obtain the optimal/non-optimal paths.
   a. Linux command: multipath -v3 | grep alua
   b. You should see output stating that ALUA was used to configure the path priorities. Example:
      Oct 01 14:28:43 sdb: prio = alua (controller setting)
      Oct 01 14:28:43 sdb: alua prio = 130

Linux I/O Tuning: The Linux OS can have a significant impact on the performance of the Lustre* solution. Tunable settings should be investigated and adjusted to obtain optimal performance.

o Linux block I/O scheduler: There are 3 common schedulers.
   - Completely Fair Queuing (CFQ): CFQ is the default setting and attempts to provide fairness in I/O scheduling via user defined classes and priorities. CFQ can interfere with the storage array's ability to read incoming I/O patterns and perform its own optimizations. Do not use this setting.
   - Deadline: Deadline scheduling attempts to provide guaranteed latencies, measured from the time the I/O arrives at the scheduler (consider how many I/O hops happen before and after the scheduler). Deadline scheduling also performs I/O interleaving, temporarily blocking some I/O in order to combine batches in increasing logical block address (LBA) order. This scheduler can also interfere with the storage array's ability to read and optimize I/O patterns. Do not use this setting.
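As a reference point, a device stanza following the four attributes above might look like the sketch below in /etc/multipath.conf. The product string is a placeholder; substitute the exact vendor and product IDs reported by multipath -v3 on your hosts, and note that multipath.conf separates keyword and value with whitespace rather than "=".

devices {
    device {
        vendor    "DotHill"
        product   "<product ID reported by multipath -v3>"
        prio      "alua"
        failback  immediate
    }
}

# Apply and verify the configuration:
chkconfig multipathd on          # start multipathd at boot
service multipathd reload        # re-read /etc/multipath.conf
multipath -v3 | grep -i alua     # confirm ALUA was used to set path priorities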

   - Noop: The Noop scheduler implements a simple first-in, first-out (FIFO) queue so that the storage controller can correctly optimize the I/O itself. Use this setting.

File system tuning: A number of tunable settings can be experimented with using the Linux blockdev command. One of them is the block device read-ahead size. The default read-ahead size is 256 sectors, which equates to 256 * 512-byte sectors (128KB). Experiment with larger settings to determine whether this improves performance.

Benchmarking

Test each unit in isolation: While it is tempting to immediately run applications from the Lustre* clients, a more methodical approach will expose the source of performance bottlenecks before the solution scales. Test first at the lowest layer, verify the results are satisfactory, then add the next layer and test again. Sequentially add layers and verify that results meet expectations at each layer. Usually you will see a slight drop in performance as each layer is added. The exception is when you start testing at the highest layers, the Lustre* OSS/MDS and clients; testing at these layers measures aggregated performance, so overall performance will look better.

1. Individual LUNs: The sgpdd-survey tool from the Lustre* I/O kit can be used to send I/O to the MDT/OST LUNs. Other tools include aio-stress and iometer. CAUTION: These tests are destructive and should not be run against LUNs containing user data.
   - Raw LUN I/O: Send direct I/O to each OST and MDT. Run I/O against one LUN at a time, and look for obvious differences in throughput and latency between like LUNs (for example, compare all OST LUNs). Obvious differences may signal a single disk error within a particular disk group. Tip: Use the Linux raw command to bind a block device to a raw character device; this ensures no kernel/OS caches or buffers are used.
   - Block I/O: Next, run the same tests against the same LUNs using the block device. You will see a slight performance drop because the OS is buffering. Remember to choose a block device whose path flows through the owning controller. Again, compare I/O between LUNs and look for obvious differences.
   - Multipath I/O: Next, run I/O through the multipath device. Once more, compare I/O between LUNs and look for obvious differences.
2. OSS/MDS layer:
   - obdfilter-survey: This tool, from the Lustre* I/O kit, can be executed from each OSS server in local file system mode. This mode tests the local file system on each OSS that sits on top of the OSTs; it tests above the hardware but below the RPC layer. CAUTION: Premature termination of an obdfilter-survey run could leave your file system in a non-pristine state.
   - mds-survey: This tool, also from the Lustre* I/O kit, tests the MDS metadata layer. It allows the tester to specify a number of threads and measures the latencies for file create, stat, and delete operations. CAUTION: Premature termination of an mds-survey run could leave your file system in a non-pristine state.
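The commands below illustrate one way to apply the scheduler and read-ahead recommendations on an OSS/MDS node and to bind a raw device for the destructive LUN tests described above. The device names (sdb, raw1) and the 4096-sector read-ahead value are placeholders chosen for illustration, not values taken from this solution.

# Select the noop elevator for the block device backing an OST/MDT LUN
cat /sys/block/sdb/queue/scheduler            # show available schedulers; the active one is bracketed
echo noop > /sys/block/sdb/queue/scheduler    # hand I/O ordering over to the array controller

# Inspect and enlarge the block device read-ahead (value is in 512-byte sectors)
blockdev --getra /dev/sdb                     # default is typically 256 sectors (128KB)
blockdev --setra 4096 /dev/sdb                # experiment: 4096 sectors = 2MB read-ahead

# Bind the block device to a raw character device for cache-free, destructive LUN testing
raw /dev/raw/raw1 /dev/sdb
raw -qa                                       # list current raw device bindings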

3. Lustre* clients:
   - IOzone: IOzone can test many different I/O patterns against the Lustre* file system from a single client. It can also operate in distributed mode to test many clients simultaneously.
4. LNET self-test: LNET self-test provides two utilities that can be used to performance test the Lustre* network. These tests should be run after all lower layers have been thoroughly tested and validated.

Utilize trusted solutions

Intel Enterprise Edition for Lustre* (IEEL): This package provides 2 main benefits.
o Ease of installation: Lustre* is complex, and significant expertise with Linux, networking, and storage is required to successfully download and deploy the open source kit. The Intel Enterprise Edition software simplifies this complexity. The Intel Manager for Lustre* (IML) software guides you through installation and configuration, two of the most difficult aspects of Lustre*. Standing up a Lustre* configuration is a much more attainable goal with Intel's software.
o Excellent management: The IML provides useful graphs representing OST heat maps, I/O read/write patterns across OSTs, metadata patterns, and memory/CPU utilization of the OSS nodes. A REST API allows remote scripting and monitoring using wget, curl, or Python.

Mellanox switch: Mellanox switches and HBAs provide an end-to-end, high speed, reliable network for HPC environments.
o Ease of installation: The network was up and running in less than 30 minutes. We used a serial connection via one of the Lustre* clients to configure the initial IP address and enable management. After that, we pointed a browser at the switch, turned on the subnet manager, and our network was up and running. Mellanox also provides a rich CLI for command line configuration and monitoring.

SOLUTION BENEFITS

Availability. Redundant, failover components provide high availability of the storage subsystem.
Protection. Enterprise class components and drives, along with RAID technology, protect valuable data sets.
Scalability. Add Storage Modules as needed to expand to petabytes of usable capacity.
Performance. Individual Storage Modules offer 6 GB/s of throughput, which scales linearly with added modules. InfiniBand network paths can deliver up to 56 Gbps of throughput to each compute node.
Neutrality. Avoid vendor lock-in by deploying Open Source solutions.
Manageability. The Intel Enterprise Edition for Lustre* software includes a rich set of management tools.
Namespace. The unified namespace offered by Lustre* eliminates the need to micromanage storage pools.
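As an illustration of the distributed mode mentioned above, the sketch below drives IOzone from several Lustre* clients at once. The client list file, mount point, thread count, and file size are assumptions to adapt to your environment; IOzone launches the remote workers over the shell named in RSH.

# clients.txt: one line per worker -- client hostname, working directory on the
# Lustre* mount, and the path to the iozone binary on that client, for example:
#   client01  /mnt/lustre/iozone  /usr/bin/iozone
#   client02  /mnt/lustre/iozone  /usr/bin/iozone

export RSH=ssh                    # have IOzone start remote workers over ssh
iozone -+m clients.txt -t 4 -s 4g -r 1m -i 0 -i 1 -c -e
# -+m clients.txt  distributed (cluster) mode using the client list above
# -t 4             four concurrent workers (uses the first four lines of clients.txt)
# -s 4g -r 1m      4GB file per worker, 1MB records to match Lustre*'s transfer size
# -i 0 -i 1        run the sequential write/rewrite and read/reread tests
# -c -e            include close() and flush times so buffered writes are not overstated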

Capacity. Each individual Storage Module can be configured with as much as 1344 TB of raw capacity. Scale out the solution with multiple Storage Modules to obtain petabytes of usable capacity.

SOLUTION COMPONENTS

Dot Hill AssuredSAN 4854 SAS RAID Storage Arrays for primary data
Dot Hill AssuredSAN 4524 SAS RAID Storage Array for metadata
LSI SAS9300-8e SAS HBAs
Industry standard x64 servers for file serving and metadata
Enterprise Linux
Intel Enterprise Edition for Lustre* software
Mellanox ConnectX-3 InfiniBand adapter cards
Mellanox InfiniBand switches

About Dot Hill Systems

Dot Hill has been delivering smart, simple storage solutions for 29 years and has shipped over 600,000 units worldwide. Dot Hill's solutions combine a flexible and extensive hardware platform with an easy to use management interface to deliver highly available and scalable SAN solutions. AssuredSAN arrays provide a high performance storage solution ideal for the large datasets and multiple compute nodes common to the oil and gas exploration industry. The AssuredSAN 4004 features 12Gb SAS host connections and proven availability, and scales up to 384 terabytes in a single array. Visit Dot Hill at www.dothill.com and our partner portal at partners.dothill.com.

About Intel

Intel (NASDAQ: INTC) is a world leader in computing innovation. The company designs and builds the essential technologies that serve as the foundation for the world's computing devices. Visit Intel at lustre.intel.com.

About Mellanox Technologies

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software, cables and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. Visit Mellanox at www.mellanox.com.

Copyright Dot Hill Systems Corporation. All rights reserved. Dot Hill Systems Corp., Dot Hill, the Dot Hill logo, and AssuredSAN are trademarks or registered trademarks of Dot Hill Systems. All other trademarks are the property of their respective companies in the United States and/or other countries.
