Xyratex Update. Michael K. Connolly. Partner and Alliances Development


Xyratex Is Now Seagate

The Continued Power of Xyratex
Global Solutions Provider of High Quality Data Storage Hardware, Software and Services
- Broad data storage solutions expertise: Storage Media, Storage Platforms, Clustered File Systems
- 370 storage-related patents
- Solving complex technical data storage requirements
- ClusterStor Scale-Out Data Storage Solutions
- OneStor OEM Storage & Capital Equipment

What Does this Mean? To Xyratex: Why Did Seagate Purchase Xyratex?
- Xyratex Brand and Structure
- Significant Growth Opportunities
- More Resources to Support and Serve Customers
- Complementary and Expanded Products and Solutions
- Access to Emerging and Innovative Technologies

What Does this Mean? To the Community:
- Continued Commitment to Support and Solution Development around Lustre
- Xyratex Renewed OpenSFS Promoter Level Membership
- Seagate is Supportive of Community Participation
- Focus on Collaboration to Yield the Highest Performance, Stability, Reliability, and Robustness of Lustre
- Upcoming and Exciting Community Plans
*Lustre is a registered trademark of Xyratex Technology, Ltd., in the United States.

Some Events Completed and Planned in 2014 (March through June): Rice Oil & Gas, ISUM, HPC AC, HPC OSL, TES Ft. Meade, HPC for Wall St., GEOINT 2014, LUG 2014, Bio-IT 2014, CUG 2014, EAGE 2014, HPC-AC at ISC, ISC 2014, HPCS Canada

Growth and Accomplishments From 2013 through 2014: A Great Year for ClusterStor and Lustre
- Awarded: Vendor to Watch!
- Grew revenue significantly, with many new customers in multiple verticals
- Hired many new employees in various support and development disciplines
- Participated in a vast number of community and industry events
- Xyratex contributed Lustre improvements in version 2.5: 4 MB I/Os, Network Request Scheduler, and metadata performance, plus over 240 bugs fixed
New ClusterStor Solutions and Features
- Launched ClusterStor 1500 and 9000 versions
- New CS OS software release 1.4 with multiple new features
- Planned integration of Lustre 2.5 within 2014
On the Horizon?

The New ClusterStor™ 9000: Engineered Solution for HPC and Big Data
- Now 50% faster than the previous generation: up to 9 GB/s per 5U-84 Scalable Storage Unit
- ClusterStor™ OS version 1.4 with new features; leverages the latest Lustre capabilities
- Engineered Solution Optimized for End-to-End Speed
- Enterprise Bullet-Proof Reliability
- Clear Industry-Leading Efficiency by a Wide Margin

New in ClusterStor OS Version 1.4:
- GridRAID (formerly called PDRAID), available in mid-2014
- New and Improved Monitoring Dashboard: a high-level view into the entire storage system (Node Status, File System Throughput, Inventory, Top System Statistics)
- The SSU+n feature (maximum n=3), whereby up to 3 Expansion Storage Units (ESUs) can be added to each SSU
- NIS GUI Support: added GUI support for configuring NIS as an option for Lustre users

Perspective on Traditional RAID
[Diagram: an Object Storage Server with four Object Storage Targets (Parity Rebuild Disk Pools #1-#4) and an empty spare drive]
- Traditional RAID rebuild times are increasingly long for high-capacity drives
- Faster data protection and recovery technology is required

ClusterStor GridRAID
[Diagram: an Object Storage Server with a single Object Storage Target (Parity Rebuild Disk Pool #1)]
- ClusterStor GridRAID restores full data redundancy up to 400% faster than traditional RAID 6 implementations
- Mandatory to effectively implement high-capacity drives and solutions
- Consolidates Object Storage Targets by a 4-to-1 reduction
- Seamless ClusterStor RAID management

How Does this Translate to Lustre?
- Traditional RAID 6, at a 50 MB/s rebuild rate, takes approximately 22 hours to rebuild a 4 TB drive. Translation: an OST's performance will degrade to 45% of peak under heavy I/O load during the rebuild, for 22 hours. (Note: 4 OSTs per OSS with RAID 6.)
- GridRAID, at the same 50 MB/s rate, reconstructs the same 4 TB drive in 5.5 hours. Translation: during reconstruction an OST can see anywhere from zero impact to the same 45% performance impact, depending on which drives hold the data stripe, for 5.5 hours under heavy I/O load. (Note: 1 OST per OSS with GridRAID.)
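The rebuild-time figures on this slide follow directly from drive capacity divided by the per-drive rebuild rate. A minimal sketch of the arithmetic, assuming the slide's 4 TB drive and 50 MB/s rate with decimal units (1 TB = 10^12 B, 1 MB = 10^6 B):

```python
# Rebuild-time arithmetic from the slide: a 4 TB drive at a
# 50 MB/s per-drive rebuild rate (decimal units).

def rebuild_hours(capacity_tb, rate_mb_s):
    """Hours to rewrite one drive's worth of data at a fixed rate."""
    return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

raid6 = rebuild_hours(4, 50)   # the single spare drive absorbs the whole rebuild
gridraid = raid6 / 4           # slide's claim: GridRAID recovery up to 4x faster

print(f"RAID 6 rebuild:    {raid6:.1f} h")     # ~22.2 h, matching the ~22 h quoted
print(f"GridRAID recovery: {gridraid:.1f} h")  # ~5.6 h, in line with the 5.5 h quoted
```

The same arithmetic explains why the problem worsens with drive capacity: doubling the drive to 8 TB at the same 50 MB/s doubles the RAID 6 rebuild window to roughly 44 hours.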

Benefits of GridRAID for Lustre
- Improved MTTR: repair >4x faster than RAID 6 at 50 MB/s per drive
- Distributed recovery workload: less disruption to client I/O; overcomes the single-drive bandwidth bottleneck
- Rebalancing up to 4x faster than rebuild, with lower system impact
- Reduced Lustre OST configuration: reduces OST count by a factor of 4
- 4 distributed spare volumes per SSU (RAID 6 provides 2 global hot spares per SSU)

In Summary: The Future Looks Bright for XYRATEX and LUSTRE

Thank You Michael_Connolly@xyratex.com