PADS GPFS Filesystem: Crash Root Cause Analysis
Computation Institute, Argonne National Laboratory


Table of Contents
Purpose
Terminology
Infrastructure
Timeline of Events
Background
Corruption
Attempted Recovery
Disaster Recovery
Transferring Data to Temporary Filesystem
Rebuilding the Filesystem
Tape Restoration
Lessons Learned
Changelog

Purpose
On June 25, 2010 the PADS cluster's GPFS filesystem experienced a catastrophic and fatal corruption. This document's goal is to explain the root cause of the crash, what was done to attempt to recover from it, the lessons learned, and the changes made to prevent this in the future.

Figure 1. Timeline

Terminology
The following terms are used throughout this document and are provided here for a better understanding.

8+2 RAID6: RAID level 6 consisting of 8 data disks and 2 distributed parity disks.
Active-active Controllers: A SAN configuration of 2 controllers where either controller can service I/O for any LUN at any time. Provides higher throughput than an active-passive configuration.
Active-passive Controllers: A SAN configuration of 2 controllers where only 1 controller can service I/O for a given LUN at a time. The other controller takes over only if the primary controller fails.
Clustered Filesystem: A cluster of servers that work together to provide a single filesystem. Clustered filesystems allow for higher performance by spreading the load and I/O across many servers, and for greater resilience to server failures.
Controller: The piece of the SAN storage array responsible for servicing I/O, maintaining RAID integrity, and monitoring the health of the storage array.
Data NSD: An NSD that contains the actual data portion of files on the GPFS filesystem.
DDN: DataDirect Networks. We use DDN to mean the disk storage array used - a DataDirect Networks S2A9550 storage array.
Disaster Recovery: The plan and procedure to follow when a catastrophic and fatal disaster has been encountered. Also referred to as DR.
DS4400: IBM's DS4400 disk storage array.
Failure Group: GPFS NSDs are placed in the same failure group if they have the same points of failure. For instance, all LUNs on the same storage array should be in the same failure group. Failure groups affect how GPFS replicates blocks.
FC: Fibre Channel. A network technology that is primarily used to transport SCSI commands in a SAN. It currently supports speeds of 1 Gbps, 2 Gbps, 4 Gbps and 8 Gbps.
Filesystem Manager: A GPFS server delegated to coordinate filesystem operations between the various GPFS servers.
fsck: Filesystem check program. Checks the integrity of the filesystem.
GPFS: IBM's General Parallel File System. A clustered, parallel filesystem.
HBA: Host Bus Adapter. The client-side FC interconnect card.
HCA: Host Channel Adapter. The client-side IB interconnect card.
IB: InfiniBand. A high-speed, low-latency network interconnect. IB topologies are created from lanes - 1 lane (1X) or 4 lanes (4X) - and the data rate - single (SDR), double (DDR), quad (QDR) - of those lanes. 1X SDR is 2.5 Gbps, 1X DDR is 5 Gbps and 1X QDR is 10 Gbps.

LUN: Logical Unit Number. Used to refer to a SCSI logical unit, a device that performs storage operations such as read and write. A tier can be carved into multiple LUNs.
LUN Presentation: Defining which LUNs a Fibre Channel host can see over specific Fibre Channel ports. Presentations are defined on the DDN.
Metadata NSD: An NSD that contains the metadata - inode, link references, creation time, modification time, etc. - of files on the GPFS filesystem.
Multipathing: Presenting the same LUN over multiple Fibre Channel paths, either to achieve more resilience against Fibre Channel port or cable failures, or higher throughput by balancing I/O across multiple Fibre Channel ports.
NSD: Network Shared Disk. A GPFS abstraction to uniquely define disks in the GPFS filesystem. NSDs allow GPFS to know that two local disks may, in fact, be the same LUN presented using multipathing. NSDs can be data only, metadata only, or data and metadata.
Parallel Filesystem: A clustered filesystem that allows multiple clients to read and write, in parallel, the same files or the same areas of a file at the same time. Data is striped across multiple storage devices in the filesystem.
RAID0: A RAID that stripes data blocks across all disks in the RAID set. Provides high throughput but has no fault tolerance to disk failures in the RAID set.
RAID5: A RAID that stripes data blocks across disks in the RAID set and maintains 1 parity disk.
RAID6: A RAID that stripes data blocks across disks in the RAID set and maintains 2 distributed parity disks. This provides added protection over RAID level 5 when a disk fails.
RDMA: Remote Direct Memory Access. Access from the memory of one computer to that of another without OS intervention. RDMA can be used over InfiniBand for high-throughput and low-latency networking.
Replication: Placing the same data or metadata block on multiple devices for fault tolerance and high availability reasons.
SAN: Storage Area Network. A network architecture that presents remote storage devices, such as disks or tape drives, to servers such that they appear as local devices to the operating system.
Tier: The DDN term for a RAID volume.
TSM: IBM's Tivoli Storage Manager. The backup software we use.
Verbs: InfiniBand functions.
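The failure group and replication terms above interact in a way that matters later in this report: when replication is enabled, the copies of a block are kept in different failure groups, so grouping disks that do not actually share a point of failure misleads that placement. The sketch below is illustrative only and is not GPFS code; the NSD names and group numbers are hypothetical.

```python
# Illustrative sketch only (not GPFS code): how 2-way replication interacts
# with failure groups. NSD names and group numbers here are hypothetical.
from itertools import combinations

# failure group id -> NSDs assumed to share a single point of failure
failure_groups = {
    1: ["nsd_fg1_a", "nsd_fg1_b"],
    2: ["nsd_fg2_a", "nsd_fg2_b"],
}

def replica_placements(groups):
    """Yield (copy1, copy2) pairs that a 2-way replicated block may use:
    the two copies must live in different failure groups, so that the loss
    of any one group still leaves a readable copy."""
    for (g1, nsds1), (g2, nsds2) in combinations(groups.items(), 2):
        for a in nsds1:
            for b in nsds2:
                yield (g1, a), (g2, b)

for (g1, a), (g2, b) in replica_placements(failure_groups):
    print(f"copy 1 on {a} (group {g1}), copy 2 on {b} (group {g2})")
```

If two disks with genuinely different points of failure are placed in the same failure group, the placement logic above will happily keep both copies of a block "spread out" while they in fact sit behind a single server or array.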

Infrastructure
The PADS GPFS filesystem is built on top of several hardware components:

DDN S2A9550. Consists of 2 active-active controllers with 8 total 4 Gbps FC connections, TB SATA disk drives providing a peak of 3.2 GB/s throughput. There are 48 tiers in an 8+2 RAID6 configuration, and each tier provides 1 LUN, for a total of 48 LUNs. All LUNs are presented to all 8 FC ports.

IBM SAN32B FC switch. This is the switch connecting the storage servers and the DDN.

10 IBM x3550 storage servers. Each server has 4 GB of DDR2 RAM, a single dual-core 2.00 GHz Intel Xeon CPU, a single-port QLogic QLx2460 4 Gbps FC HBA, and a Mellanox 4X DDR IB HCA.

GPFS. We are running IBM's GPFS.

Figure 2. PADS Interconnect
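As a rough sanity check on the figures above, the quoted 3.2 GB/s peak lines up with the aggregate usable bandwidth of the 8 FC ports. The arithmetic below is only an observation; the 8b/10b encoding overhead of 4 Gb Fibre Channel is a general property of the protocol, not a number taken from this document.

```python
# Back-of-the-envelope arithmetic for the DDN's quoted 3.2 GB/s peak.
ports = 8                  # total FC connections across the two controllers
line_rate_gbps = 4         # 4 Gbps Fibre Channel per port
encoding_efficiency = 0.8  # 8b/10b encoding: 8 data bits per 10 line bits

usable_mb_per_port = line_rate_gbps * 1000 / 8 * encoding_efficiency  # ~400 MB/s
aggregate_gb_s = ports * usable_mb_per_port / 1000                    # ~3.2 GB/s

print(f"~{usable_mb_per_port:.0f} MB/s per port, ~{aggregate_gb_s:.1f} GB/s across {ports} ports")
```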

Timeline of Events

Background
When we were configuring the PADS GPFS filesystem, we consulted with both IBM and DDN for guidelines and suggestions on the most scalable, highest-performance configuration to use. We were provided a Best Practices document which recommended separating the metadata NSDs from the data NSDs to obtain the best performance, and that is what we did. We made the DDN LUNs data-only NSDs. Each storage server had an unused local SATA disk, and each of these was made a metadata-only NSD. This configuration is fully supported by GPFS.

However, we quickly realized that this wasn't an optimal configuration. Because metadata was now being kept on disks accessible to only one server, when that server rebooted or crashed the filesystem would go offline, because those metadata blocks could not be accessed. We developed a plan to enable metadata replication so that when one server went offline, the replica server could take over. We went over this change plan with IBM developers and, at their request, changed it so that the metadata disks would be placed in only 2 failure groups. This suggestion was a core reason for our lack of resilience and indirectly led to the metadata corruption that eventually crashed the filesystem. Because disks that had different points of failure were in the same failure group, GPFS made assumptions that did not hold. This led to performance and scalability problems.

We realized we needed to transition the metadata to SAN disks, but could not use the DDN because its LUNs were already configured to be data only. With the UC TeraGrid RP site being decommissioned, an IBM DS4400 storage array was no longer in use and would serve this purpose perfectly. We racked, configured, and extensively tested this hardware to make sure there were no performance or stability issues that needed solving beforehand. We added the DS4400 into the SAN and further tested that the servers were compatible with it and handled failures, such as FC links going down and disk failures, gracefully. After all of these tests passed, we added the DS4400 LUNs into the GPFS filesystem as metadata disks and let them passively participate for two weeks. We continued stability tests during this time with no interruption to the filesystem or its operations.

Corruption
On June 23, 2010 we started the process of migrating the metadata off of the local SATA disks in each storage server to the DS4400 storage array. Almost all metadata, >99%, had successfully been migrated to the DS4400 when, on June 25, 2010, the migration crashed. We believe the metadata was left in an unknown and corrupted state at this point. After investigation, and after observing behavior during the attempted recovery, we believe the GPFS filesystem manager (fsmgr) node ran out of memory while performing a metadata consistency check.

Attempted Recovery
On June 25, 2010 we opened a severity 2 ticket with IBM and were directed to run a no-repair fsck on the filesystem. We also announced the emergency outage to the user community and offered to restore any data needed from tape to a temporary location. About 5-6 users asked for portions of projects to be restored, which we did. The fsck was run in no-repair mode so as to only report errors and not attempt to fix them. Once the fsck completed, the results were sent to IBM and we were advised to run fsck in repair mode. We started this but were unable to get the fsck to complete. On June 27 we had the ticket escalated to severity 1.
On June 29 we discovered that the fsmgr was running out of memory during the fsck, and we increased the RAM on that server from 4 GB to 12 GB. The fsck continued to fail by running the fsmgr out of memory. We then added a server to the cluster with 24 GB of RAM and 32 GB of swap and forced it to be the fsmgr. With the new fsmgr we were able to have the fsck complete and fix some problems, but some problems still remained.

After several fscks, some inconsistencies remained and would never be repaired. On July 2, IBM advised that the filesystem was irreparable and that we should implement our disaster recovery procedure.

Disaster Recovery
On July 2, 2010 we announced our disaster recovery procedure to the user community. We had two goals for the recovery:
1. Recover as much, if not all, of the data on the filesystem.
2. Provide read-only access to the current data during the restoration.
To meet these goals we had the following steps:
1. Transfer the current data to a temporary filesystem (approximately 5 days).
2. Make the data on the temporary filesystem available read-only.
3. Rebuild the filesystem on the DDN array.
4. Start the restore process from tape (approximately 2-3 weeks).
5. Transfer files from the temporary filesystem that were created or modified after the last backup was taken.
6. Release the filesystem and cluster back into operation.

Transferring Data to Temporary Filesystem
The PADS compute cluster nodes were already in a GPFS cluster, there was a high-speed IB interconnect between them and the storage nodes, and each compute node has roughly 2.5 TB of usable disk capacity, so we converted the compute-node GPFS cluster into a GPFS filesystem. Each compute node contributed its local RAID0 volume to the filesystem. Because RAID0 is not tolerant of even a single disk failure, we enabled replication and ensured each disk was in its own failure group. We opted not to rebuild the compute nodes' RAID volumes as something more fault tolerant, such as RAID5, because of the time to do so - roughly 2-3 days for all 48 RAID volumes to initialize.

On July 2, 2010 we started copying as much data as we could from the now corrupt filesystem to the temporary GPFS filesystem. We monitored the health of the cluster nodes and their disks during this time with no failures. The data migration completed, the appropriate firewall holes were in place on July 6, and the temporary filesystem was made available read-only to users. On July 7, there were hardware failures in two separate nodes: node c05 suffered a disk failure, taking its RAID set offline, and node c12's RAID controller failed, taking its RAID set offline. Taken separately, these failures would not have been fatal, but combined they destroyed the temporary GPFS filesystem.

Rebuilding the Filesystem
On July 6 we started the process of recreating the GPFS filesystem. There were 2 tiers in the DDN that still needed to be upgraded to 1 TB drives, so we replaced and built those tiers. It took about 1.5 days to build the new tiers. While the tiers were building, we researched how to ensure that the new filesystem would be configured for the highest availability, best performance, and largest amount of usable capacity possible. We discovered several parameters to modify; these parameter changes are detailed in the Changelog section below.

On July 7, the tier building finished, and we created the new GPFS filesystem and recreated the project filesystem structure.

Tape Restoration
After the filesystem was created we attempted to start tape restorations, but encountered bugs in our version of the TSM server. We worked with IBM support to develop workarounds until we could upgrade, and on July 8 started restorations from tape. Initially things looked good, with the first node restoring around 300 MB/s, but as more nodes started restoring we noticed that 300 MB/s was an aggregate limit. After investigating, we discovered that multipathing was incorrectly configured and corrected it. We restarted the restore on July 9 and averaged approximately 450 MB/s with peaks up to 600 MB/s. See the Changelog section for details of the multipath issue. The Argonne Leadership Computing Facility (LCF) division loaned us 6 tape drives, bringing our total drive count to 10. Because of their generosity, we were able to have all 10 storage servers performing restores concurrently. Excluding the two largest projects, all projects were restored by July 14, and we released the filesystem back for full use on July 15.

Lessons Learned
We have known for some time that placing the metadata on host-local disks in two failure groups with replication is a non-standard and sub-optimal configuration, and we had been working towards a more standard configuration. We were able to apply that knowledge in the creation of the new filesystem. In addition, we learned how GPFS accesses data when a node has direct access to the NSDs and have designed the new filesystem to exploit this (see Changelog). We learned better how multipathing works and how to configure and optimize it (see Changelog). We learned that some filesystem operations require more memory on the fsmgr node. Because any of the nodes in the cluster could be delegated as the fsmgr, we are increasing the memory on each node from 4 GB to 12 GB. While this is still not enough memory to perform a fsck in one pass, it should prevent running out of memory for all other operations. The extra memory will also allow us to increase the amount of memory GPFS can pin for certain cached operations, increasing performance in some cases. Lastly, we discovered that our current backup strategy is optimized for backups but not for DR restores. In the coming weeks we will analyze how to organize the data on tape and in TSM so that we can back up efficiently, perform accurate accounting and reporting, and restore projects or the whole filesystem as quickly as possible.

Changelog
The configuration of GPFS, the OS, and the DDN have all been heavily modified based on knowledge gained prior to this outage and during the reconfiguration of the new filesystem. Below we detail these changes.

Consolidate data and metadata. Both data and metadata are now on the same LUNs on the DDN. While this is not the highest performing configuration, it is the most reliable and should still provide very good performance.

Fixed multipathing. Because each LUN is presented to all eight ports of the DDN, a server sees the same LUN 8 times, resulting in what looks like 8 different disks (/dev/sdc, /dev/sdd, /dev/sde, etc.). Multipathing knows that these 8 presentations are all the same LUN and groups them together into one logical disk (e.g., /dev/mpath0). The multipath software is responsible for determining which disk (/dev/sdc, /dev/sdd, /dev/sde, etc.) to send I/O to, and thereby which port on the DDN the I/O is sent over. Previously the multipath software was misconfigured and was sending I/O only to 2 ports on one controller for all LUNs. This meant that 3/4 of our available bandwidth to disk wasn't being utilized, and it was in fact causing contention on those two ports. We've fixed this so that odd-numbered multipath disk (/dev/mpath1, etc.) I/O is sent in a round-robin fashion to all 4 ports of controller 1, and all even-numbered multipath disk (/dev/mpath0, etc.) I/O is sent in a round-robin fashion to all 4 ports of controller 2. If a path or controller fails, I/O is sent to the secondary controller. This means that now all I/O is spread evenly over all 8 ports of the DDN and no one controller does too much work (see Figure 3). A sketch of this policy appears below.
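The odd/even policy just described can be summarized in a few lines. The sketch below is illustrative only; the port labels are hypothetical, and the real policy lives in the multipath software's configuration, not in Python.

```python
# Illustrative sketch of the balancing policy described above. Device and
# port names are hypothetical.
CONTROLLER_PORTS = {
    1: ["c1p0", "c1p1", "c1p2", "c1p3"],  # 4 FC ports on controller 1
    2: ["c2p0", "c2p1", "c2p2", "c2p3"],  # 4 FC ports on controller 2
}

def port_groups(mpath_index: int):
    """Odd-numbered multipath devices round-robin over controller 1's ports,
    even-numbered devices over controller 2's ports; the other controller's
    ports remain available as the failover group."""
    primary = 1 if mpath_index % 2 == 1 else 2
    secondary = 2 if primary == 1 else 1
    return CONTROLLER_PORTS[primary], CONTROLLER_PORTS[secondary]

for i in range(4):
    active, standby = port_groups(i)
    print(f"/dev/mpath{i}: active={active} failover={standby}")
```

With half of the LUNs preferring each controller, steady-state I/O is spread across all 8 ports, and a port or controller failure only shifts the affected LUNs to the surviving path group.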

Enabled InfiniBand RDMA verbs. When the storage array moved physically close to the PADS compute cluster, we connected the storage servers to the cluster IB fabric. We thought we had enabled GPFS to use IB RDMA when we did this, but a missing package was silently turning this feature off, effectively halving the available bandwidth between the storage servers and to the rest of the compute cluster. RDMA verbs support is now on and fully functional.

Present LUNs only to NSD owners. We discovered that if a server can see all the NSDs in the filesystem, that server will perform I/O directly to the NSDs regardless of whether it is the NSD owner or not. This meant that for operations that happen directly on a server, like GridFTP or restoration, I/O was not being striped across all servers but instead was only being performed on that server, so the maximum available bandwidth for those operations was that of the server's FC connection, which is 4 Gbps. To fix this, we present only those LUNs that a server is primary or secondary for, thereby forcing I/O to be striped across all nodes. See Figures 4 and 5 for a graphical representation. Up until Wednesday the 14th all servers could see all LUNs and the servers on ports 0/1, 0/2, and 0/3 were performing restores; you can clearly see that those ports are performing the only I/O, with some nodes doing nothing. After the 14th we enabled LUN presentation, and you can see I/O is almost uniformly spread across all 10 servers.

Disable read-ahead prefetch. The DDN can perform read-ahead prefetching in an effort to anticipate the next read request; however, with a parallel filesystem such as GPFS it is very poor at succeeding, so this option can actually be a performance drag. We disabled this and enabled block-level OS settings (see below) to allow GPFS to do the read-ahead prefetching.

Tune DDN write cache size. We aligned the write cache size to match the RAID stripe and GPFS block size. This should provide a minor performance increase, as write operations should all be aligned on the same block.

Increase block device read-ahead size. We enabled and increased the default OS block device read-ahead size to allow GPFS to fetch a larger chunk of data for read-ahead prefetch and caching.

Increase block device request size. We increased the OS block I/O request size to allow GPFS to read and write in larger chunks.

Tuned FC HBA queue depth. Each port of the DDN has a transaction queue depth of 256. This means that under heavy load, or in an effort to bundle I/O requests together, the DDN can queue 256 transaction requests before denying further transactions while the queue drains. We applied a formula to prevent the storage servers from overrunning the DDN port transaction queue; a sketch of this kind of calculation appears below.
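The document does not spell out the formula that was applied. One common sizing rule, shown here purely as an illustration, caps the per-LUN HBA queue depth so that all hosts driving all of their LUNs at full depth cannot exceed the port's 256-entry queue. The host and LUN counts in the example are assumptions.

```python
# Hedged sketch of a common HBA queue depth sizing rule; the 256-entry port
# queue comes from the document, everything else here is assumed.
def hba_queue_depth(port_queue_depth: int, luns_per_port: int, hosts_per_port: int) -> int:
    """Cap the per-LUN HBA queue depth so that, even if every host drives
    every LUN it sees at full depth, the array port's transaction queue
    cannot overflow."""
    return max(1, port_queue_depth // (luns_per_port * hosts_per_port))

# Example: a 256-entry DDN port queue shared by (assumed) 2 hosts, each
# seeing (assumed) 10 LUNs over that port.
print(hba_queue_depth(port_queue_depth=256, luns_per_port=10, hosts_per_port=2))  # -> 12
```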

GPFS block size now matches RAID stripe and write cache size. The GPFS block size now matches the DDN tier RAID stripe size. This means writes are aligned on stripe boundaries and allows the write cache to perform better (a rough alignment example appears after Figure 3 below).

Aligned LUN ownership to match multipath rules. Even though the DDN is an active-active configuration, LUNs are still owned by one of the controllers, and there is a small hand-off that happens when the other controller accesses the LUN. To prevent this very minor performance hit, we updated LUN ownership so it matches the multipath rules: odd LUNs are owned by controller 1 and even LUNs by controller 2. Now the hand-off should only occur when there is a problem with one of the controllers or the FC fabric.

Increased the number of SSH connections. GPFS uses SSH for communication between nodes. In some cases with the default settings, SSH could deny further connection attempts until others complete, causing timeouts and misbehavior of GPFS operations. We increased the number of allowed SSH connections to prevent this.

Set a higher amount of reserved virtual memory (VM). GPFS can make use of VM under heavy load. By default the OS reserves some portion of this, keeping it from being used by applications, but the default value is too low. We increased this reserved amount to keep GPFS from running the OS out of VM.

Figure 3. DDN Throughput per Port
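As an illustration of the block size alignment noted in the changelog, the arithmetic below assumes a per-disk segment size (the document does not state it) and shows how the full-stripe width of an 8+2 tier determines the matching GPFS block size.

```python
# Hedged alignment example. The 8 data disks per tier come from the 8+2
# RAID6 layout in the document; the per-disk segment size is assumed.
data_disks = 8        # 8+2 RAID6: 8 data disks per tier
segment_kib = 512     # assumed per-disk segment size, for illustration only

stripe_kib = data_disks * segment_kib
print(f"full stripe = {stripe_kib} KiB -> choose a GPFS block size of {stripe_kib // 1024} MiB")
# A full-stripe-aligned write lets the controller compute parity without a
# read-modify-write cycle and lets the write cache flush whole stripes.
```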

Figure 4. Before LUN Presentation

Figure 5. After LUN Presentation
