WHITEPAPER: Understanding Pillar Axiom Data Protection Options




Introduction

This document gives an overview of the Pillar Data Systems Axiom RAID protection schemas. It does not delve into corner cases or exceptions outside of what is considered normal operation. On the Pillar Axiom storage system, data sets are protected with hardware RAID and striped across multiple disk drives for performance. These data sets can also be mirrored for an additional level of data protection.

[Figure: Axiom system topology. NFS/CIFS, Fibre Channel, and iSCSI clients connect over the customer LAN/WAN and customer SAN to the Slammer NAS and SAN controllers; management clients reach the Pilot policy controller over the management LAN/WAN; the Slammers connect through the Storage System Fabric to the storage Bricks.]

Axiom Hardware Components

This section defines the various hardware components of the Pillar Data Systems Axiom storage system.

Pillar Axiom Pilot Policy Controller

The Pilot is an out-of-band management system that directs and manages all system activity. It has no connection to the data path; it communicates with the rest of the Axiom system over a private Ethernet network. Each Axiom system has one Pilot, which includes a Graphical User Interface (GUI) for user configuration and management of the Axiom system; a Command Line Interface (CLI) used in scripts for the same configuration and management tasks; and two independent Control Units (CUs) that operate in active/passive mode. While the Pilot contains redundant components, it is not considered a critical piece of the data protection puzzle because it sits outside the data path.

Pillar Axiom Slammer Storage Controller

The Slammer provides the external interface to the storage network and processes and manages every Input/Output (I/O) request. An Axiom system can include both NAS and SAN Slammers; SAN Slammers can be Fibre Channel (FC), iSCSI, or both. The current architecture allows up to two Slammers per system, with future plans for expansion to four.

[Figure: Slammer Control Unit. Each CU connects to the storage network (Gigabit Ethernet or FC) on the front end and to the Pillar Axiom Brick storage enclosures and the other Slammer CUs on the back end.]

A Slammer contains two CUs that function as an active-active pair and are configured identically. The primary components of a CU are a pair of network interfaces (GbE for NAS and iSCSI SAN, or FC for SAN), CPU and memory resources, and an integrated FC switch. Slammers connect to the system storage pool through eight Fibre Channel ports on embedded Fibre Channel loop switches. Bricks connect to Slammers through a switched Fibre Channel fabric; this private fabric is called the System Storage Fabric (SSF). The Slammers access and virtualize the storage pool, so you can easily increase the number and size of file systems and LUNs. A primary job of the Slammer is to store host I/Os in cache and then de-stage them to the back-end disk pool, which is composed of multiple Bricks. Each segment of data written by the Slammer is striped across multiple drives in the intelligent storage enclosures called Bricks.

Pillar Axiom Brick Storage Enclosure

There are two types of Bricks: SATA Bricks and FC Bricks. For the purpose of this paper we will discuss SATA Bricks; FC Bricks have similar properties. A SATA Brick has 13 disk drives, with the 13th drive used as a hot spare for automatic failover. The remaining 12 disks are pre-configured as two six-drive RAID-5 groups (disk arrays), each controlled by a primary RAID controller with the peer controller acting as a secondary backup. The Brick protects data from drive failures. Drive rebuilds for current drive technologies complete in under 4 hours, eliminating the need for more costly RAID protection such as RAID 6 or RAID 1+0 (also called RAID 10). The Brick is fully redundant internally: each hot-swappable RAID controller acts as the primary controller for one of the two RAID-5 arrays and has a path to every disk drive inside the Brick. If one controller fails, the surviving controller continues to process I/Os for both RAID arrays. The current architecture supports up to 64 Bricks per system.
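The Brick layout described above fixes the usable capacity per enclosure. The following sketch (not Pillar software) computes it under the stated layout, assuming the 500 GB drive size mentioned later in this paper:

```python
# Illustrative sketch: usable capacity of one SATA Brick under the layout
# described above -- 13 drives, one dedicated hot spare, and the remaining
# 12 drives arranged as two six-drive RAID-5 groups.
# The 500 GB per-drive size is an assumption for illustration.

DRIVE_GB = 500          # assumed per-drive capacity
DRIVES_PER_BRICK = 13   # includes the dedicated hot spare
SPARES = 1
RAID5_GROUP_SIZE = 6    # drives per RAID-5 disk array

def brick_usable_gb(drive_gb=DRIVE_GB):
    data_drives = DRIVES_PER_BRICK - SPARES      # 12 drives in RAID groups
    groups = data_drives // RAID5_GROUP_SIZE     # two six-drive RAID-5 arrays
    # RAID 5 dedicates one drive's worth of space per group to parity
    usable_per_group = (RAID5_GROUP_SIZE - 1) * drive_gb
    return groups * usable_per_group

print(brick_usable_gb())  # 2 groups x 5 data drives x 500 GB = 5000 GB
```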

Understanding Pillar's RAID Options

Now that we've defined the basic components of an Axiom, we can discuss how Pillar's Standard and Double Protection levels protect customers' critical information against data loss.

Standard Protection

Standard Protection, Pillar's default level of protection, is RAID 5+0 (also known as RAID 50). Each disk array is a RAID-5 group, combining parity with disk striping across multiple drives, and a Pillar virtual LUN is striped (RAID 0) across multiple disk arrays. RAID 50 provides increased write performance, improved data protection, and fast rebuild times after a drive failure. During a drive rebuild, system performance remains high because the other RAID-5 arrays in the stripe continue to function at full speed. Other architectures place the rebuild load on the I/O controller, impacting overall performance and lengthening rebuild times. The figure below shows how data sent from the Slammer is distributed across two Bricks comprising four RAID-5 disk arrays, for a total of 24 drives.

Standard Protection is typically striped over four disk arrays, so two SATA Bricks are required. A LUN or volume that uses Standard Protection across two Bricks can survive the simultaneous loss of four drives (one per RAID group) with no loss of data. Two of the failed drives (one per Brick) will rebuild, in parallel, in less than 4 hours to the dedicated hot spare in each Brick. The disk arrays that complete the rebuild could then each lose another drive and still have no data loss; it would take a seventh drive failure to cause data loss. A double drive failure within a single disk array inside a 4-hour window, or a catastrophic failure of a Brick, can cause data loss; both scenarios are extremely unlikely.

[Figure: a stripe of data (A, B, C, D) from the Slammer is distributed across four RAID-5 disk arrays: A and B to Disk Arrays 1 and 2 in Brick 1, C and D to Disk Arrays 3 and 4 in Brick 2.]
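The RAID 50 placement above can be sketched as a simple model (the array names and round-robin chunk mapping are illustrative, not Pillar's implementation):

```python
# Minimal sketch of RAID 50 placement: a virtual LUN is striped (RAID 0)
# across multiple RAID-5 disk arrays, here four arrays spread over two
# Bricks as in the figure. Names and the chunk-to-array rule are
# illustrative assumptions, not Pillar's actual layout algorithm.

def place_stripe(chunks, arrays=("Brick1/Array1", "Brick1/Array2",
                                 "Brick2/Array3", "Brick2/Array4")):
    """Map each chunk round-robin onto a RAID-5 disk array (the RAID 0 layer).

    Each target array then adds its own parity internally (the RAID 5 layer),
    which is why one drive per array can fail with no data loss.
    """
    return {chunk: arrays[i % len(arrays)] for i, chunk in enumerate(chunks)}

layout = place_stripe(["A", "B", "C", "D"])
print(layout)
# {'A': 'Brick1/Array1', 'B': 'Brick1/Array2',
#  'C': 'Brick2/Array3', 'D': 'Brick2/Array4'}
```

Note that the RAID 0 layer adds no redundancy of its own; all the fault tolerance in Standard Protection comes from the per-array RAID-5 parity.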

Double Protection

For enhanced data protection, Pillar offers Double Protection, also known as RAID 50+1. With Double Protection, Pillar mirrors each stripe of data written to a disk array to another disk array on another Brick. Double Protection requires a minimum of two Bricks. The figure below shows how data from the Slammer is written using Double Protection. Note how a copy of stripe segment A is written to Disk Array 1 on Brick 1 while its mirrored copy is written to Disk Array 3 on Brick 2. Also note that with Double Protection, an entire Brick can fail with no data loss.

[Figure: each stripe segment is written twice, always to arrays in different Bricks. Brick 1 holds A1 and C2 on Disk Array 1 and B1 and D2 on Disk Array 2; Brick 2 holds C1 and A2 on Disk Array 3 and D1 and B2 on Disk Array 4.]
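The mirrored placement in the figure can be sketched the same way. The pairing table below follows the figure; the placement rule itself is an illustrative model, not Pillar's actual algorithm:

```python
# Sketch of Double Protection (RAID 50+1) placement: every chunk is written
# twice, and the mirror always lands on a disk array in a *different* Brick,
# so the loss of an entire Brick leaves a complete copy intact.
# Array names follow the figure; the pairing rule is an assumption.

PRIMARY_TO_MIRROR = {
    "Brick1/Array1": "Brick2/Array3",   # A1 -> A2
    "Brick1/Array2": "Brick2/Array4",   # B1 -> B2
    "Brick2/Array3": "Brick1/Array1",   # C1 -> C2
    "Brick2/Array4": "Brick1/Array2",   # D1 -> D2
}

def place_double(chunks):
    """Return {chunk: (primary_array, mirror_array)} for each chunk."""
    arrays = list(PRIMARY_TO_MIRROR)
    layout = {}
    for i, chunk in enumerate(chunks):
        primary = arrays[i % len(arrays)]
        layout[chunk] = (primary, PRIMARY_TO_MIRROR[primary])
    return layout

for chunk, (primary, mirror) in place_double(["A", "B", "C", "D"]).items():
    # Primary and mirror never share a Brick.
    assert primary.split("/")[0] != mirror.split("/")[0]
    print(chunk, primary, mirror)
```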

Conclusion

Losing two drives in the same disk array in less than 4 hours (the time needed to rebuild a failed 500 GB SATA drive) is extremely rare. With drives becoming more reliable and with Pillar continuing to decrease rebuild times, the likelihood of a double drive failure is becoming even more remote. The same is true of losing a complete Brick: it rarely, if ever, happens. Pillar nevertheless allows end users to set the protection level of a virtual LUN or volume to survive multiple drive failures or even the failure of an entire Brick. Other vendors require host-based RAID protection in combination with their controller-based protection, introducing more points of failure, more complexity, and more cost to provide these levels of protection. Pillar is the clear leader when it comes to protecting users' data.

Pillar Data Systems takes a sensible, customer-centric approach to networked storage. We started with a simple yet powerful idea: build a successful storage company by creating value that others had promised but never produced. At Pillar, we're delivering the most cost-effective, highly available networked storage solutions on the market. We build reliable, flexible solutions that, for the first time, seamlessly unite SAN with NAS and enable multiple tiers of storage on a single platform. In the end, we created an entirely new class of storage.

www.pillardata.com

2008 Pillar Data Systems. All Rights Reserved. Pillar Data Systems, Pillar Axiom, AxiomONE and the Pillar logo are trademarks or registered trademarks of Pillar Data Systems. Other company and product names may be trademarks of their respective owners. Specifications are subject to change without notice. WP-DP-0308