TECHNICAL WHITEPAPER: Disk-Based Backups. Backup-to-Disk Optimization: The Pillar Approach

Backup and recovery technologies have been around for many years, and software and hardware providers continuously advance the tools used to address data availability. Traditionally, tape was the medium of choice, but tape is prone to reliability and speed problems, and the exponential growth of customer data further strains the limits of traditional tape methodologies. The major drawbacks of tape are that it is a serial-access device, read and written by mechanical means, prone to failure, labor intensive, and costly once you consider total Return on Investment (ROI) and the risk of data loss. For years, tape manufacturers have developed faster tape drives and different connection types (SCSI, Serial, and Fibre Channel), and they have begun exploring disk-based solutions in their sub-systems, either as raw disk or as a Virtual Tape Library.

The objectives of this approach are twofold: first, to complete a reliable backup in a minimal amount of time, and second, to reduce the time required to restore data, or to restore production to an operational point in time, in the event of data loss or disaster. Any backup and recovery method should integrate with the existing IT infrastructure and minimize the impact on production operations. Regulatory governance in many industries requires organizations to protect ever-greater amounts of data and to guarantee timely access to that data. Data growth and heightened interest in corporate and IT governance, along with the cost of both, are forcing organizations to examine efficiencies across all aspects of IT operations.

IT organizations must increase efficiency and reduce costs. Decisions about backup/recovery applications, disk subsystems, disaster recovery practices, and IT staffing are all geared toward making data reliable and accessible at any point in time with fewer IT personnel. This trend has a host of storage and tape vendors attempting to fulfill the promise of doing more with less. The predominant answer to data growth, retention, and recovery pains is low-cost disk systems: the cost of ATA disks, coupled with software enhancements, makes disk-based solutions an appealing choice. Vendors have implemented several technologies: SAN disk, NAS disk, CAS disk, and Virtual Tape Libraries combining disk and tape. Unfortunately, these traditional storage and tape vendors show customers best practices that require them to do more with more. Islands of specialized storage, or storage-to-tape sub-systems, appear in the data center: separate storage for production applications, storage for backup and recovery, storage for backup staging, and storage for long-term archive. Along with all this storage come its acquisition cost, its maintenance cost, and the extra IT staff to manage it: doing more with more.

The Pillar Axiom disk-based storage subsystem delivers a sensible alternative. Pillar meets customer needs for application data, local or remote backup and recovery, and disaster recovery practices without creating separate storage islands and without straining IT infrastructure, personnel, or budgets. Pillar delivers on the promise of doing more with less: fewer silos of specialized storage, reduced administrative costs, and better data management and availability.

Disk Technology: Understanding different disk storage methods

Before discussing how to leverage the various disk technologies, we need to understand what is generally available, along with the advantages and disadvantages of each. Because IT technology is constantly changing, we cannot provide an exhaustive overview of disk technology, but we describe the disk technologies available to organizations considering backup-to-disk systems.

JBOD/DAS

"Just a bunch of disks," or sometimes "just a bunch of drives," is a mildly derogatory term (the official term is spanning) for a computer's hard disks that have not been configured according to any RAID scheme for increased fault tolerance or improved data-access performance. Direct Attached Storage (DAS) is sometimes referred to as JBOD, but the term also covers the internal drives of a computer or external arrays not connected to any network.

JBOD/DAS advantages: Ideal for small environments with minimal complexity, where scalability and growth are not factors and where cost is a major concern.

JBOD/DAS disadvantages: JBOD implies a lack of sophistication in the storage environment and may require more storage administration and expertise to deliver the required service levels to each application.

Network-Attached Storage (NAS)

NAS comprises both hard disks and management software and is dedicated entirely to serving files over a company network, typically Gigabit Ethernet. It is based on standard network protocols such as TCP/IP, NFS, and CIFS. NAS systems typically consist of RAID systems plus software for configuring and mapping file locations to the network-attached device, and storage is shared across multiple servers.

NAS advantages: NAS disk subsystems offer easy file sharing with various access-control and security tools and a streamlined management interface, and they support multiple protocols for heterogeneous connectivity. NAS disk subsystems are specialized, dedicated systems that are typically easy to manage because they are configured exclusively for serving files. NAS systems deploy easily in existing TCP/IP environments, making them well suited for deployments of any size.

NAS disadvantages: NAS disk subsystems are not well suited for certain applications, such as high-end OLTP databases, complex disk-to-disk backup architectures, and streaming video. It is not that these applications won't work on NAS, but performance limits are eventually reached without a dedicated infrastructure; at that point SANs, with their higher throughput, are more suitable. Most NAS disk sub-systems are closed-loop systems: they do not allow mixing SAN and NAS in a single box, which limits the ability to allocate storage effectively according to application and performance needs. This is not the case with the Pillar Axiom system.

Storage area network (SAN)

A SAN is a high-speed, special-purpose network or sub-network that interconnects different kinds of data storage devices with associated data servers on behalf of a larger network of users. Typically, a SAN is part of the overall network of computing resources for an enterprise. SANs are usually clustered in close proximity to other computing resources but may also extend to remote locations for backup and archival storage, using high-speed wide-area carrier technologies such as ATM or SONET. Strictly speaking, a SAN is a connection method to storage, not the storage itself, but many references to SANs today characterize the type of storage a customer has in the environment: typically a Fibre Channel-connected, cached disk sub-system with multiple connections into the SAN and large physical storage repositories on the back end. Today's SANs support disk mirroring and snapshots, backup and restore, archival and retrieval of archived data, data migration from one storage device to another, and data sharing among different servers in a network. SANs can also incorporate sub-networks with NAS systems. Pillar offers a unique SAN-attached storage sub-system that breaks with the traditional approach by enabling deployment of different classes of storage in a SAN environment.

SAN advantages: Higher disk-access speeds, storage and data-center consolidation, and higher backup and recovery speeds.

SAN disadvantages: Higher implementation and maintenance costs, plus greater administrative complexity in larger environments, limit the viability of SAN deployments in smaller organizations.

Content-addressed storage (CAS)

CAS is a method of providing fast access to fixed content (data that is not expected to be updated) by assigning it a permanent place on disk. CAS makes data retrieval straightforward by storing objects in such a way that an object cannot be duplicated or modified once it has been stored.

CAS advantages: A significant advantage of CAS is that it minimizes the storage space consumed by data backups and archives, preventing the overwhelming buildup of information that may be obsolete, redundant, or unnecessary. Another advantage is authentication: because there is only one copy of an object, verifying its legitimacy is simple.

CAS disadvantages: Most CAS disk subsystems require backup and compliance application vendors to write to the CAS vendor's API. This can lead to disparity between the breadth or depth of application-vendor support and what the CAS storage sub-system currently provides, and not all application vendors have written to CAS vendor APIs. Slow performance is another issue associated with CAS subsystems; most CAS vendors recommend placing only aged data onto a CAS disk subsystem. Typically, you cannot re-deploy CAS storage to other types of applications, such as databases or fileservers.
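As a rough illustration of the content-addressing idea (a toy sketch, not Pillar's system or any CAS vendor's actual API; all names here are hypothetical), the following Python fragment derives an object's storage address from a cryptographic hash of its content. This is what makes stored objects effectively immutable and naturally de-duplicated: identical content always maps to the same address, and any modification would change the address.

    import hashlib

    class ContentAddressedStore:
        """Toy content-addressed store: an object's address is the
        SHA-256 digest of its content, so identical objects share one
        copy and an object cannot be modified in place once stored."""

        def __init__(self):
            self._objects = {}  # address -> immutable content

        def put(self, content: bytes) -> str:
            address = hashlib.sha256(content).hexdigest()
            # Writing the same content twice is a no-op: one copy is kept.
            self._objects.setdefault(address, content)
            return address

        def get(self, address: str) -> bytes:
            content = self._objects[address]
            # The address doubles as a fingerprint, so reads self-verify.
            assert hashlib.sha256(content).hexdigest() == address
            return content

    store = ContentAddressedStore()
    addr = store.put(b"fixed-content archive record")
    assert store.put(b"fixed-content archive record") == addr  # de-duplicated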

Virtual tape library (VTL)

A VTL is an archival storage technology that makes disk drives appear as physical tape devices to backup applications, so VTL systems typically require minimal changes to those applications.

VTL advantages: Benefits of VTL systems include better backup and recovery times and lower operating costs. VTL technology can be used with a hierarchical storage management (HSM) system, in which data moves to slower but less costly storage media as it falls through various usage thresholds. VTL systems may also be used as part of a SAN, where a single VTL server can manage less frequently used or archived data for many networked computers. A VTL system offloads from the main computer the processing involved in deciding whether data should be available in the faster disk cache or written to a tape cartridge, and it can manage data so that more of the space on a tape cartridge is actually used. VTL devices are also well suited for NDMP-type backups in larger NAS filer environments: you get all the benefits of disk-to-disk backup with your filers without having to dedicate individual physical tape drives to each filer.

VTL disadvantages: VTL systems require additional administrative tasks to manage them. The cost of VTL devices, including hardware, software, and additional licensing from the backup application, is something to consider. Most VTL offerings are dedicated devices that do not allow the underlying storage to be used for any other purpose, so end users can end up trapping unused capacity in their VTL appliances.

Single-instance storage (SIS) disk

Pioneered by Digital Equipment Corporation in the mid-1980s and optimized by Microsoft in the mid-1990s, SIS is a system's ability to keep one copy of content associated with multiple custodians. For user files, SIS disk systems create a hash value, or digital fingerprint, for the document body and metadata of each file. Each file is de-duplicated upon loading into the SIS disk subsystem, with forensic references to the source file path and media. SIS storage is similar to CAS storage but typically does not require application vendors to write to a storage API. Many SIS disk systems are used in conjunction with VTL technology.

SIS disk advantages: SIS disk subsystems are accelerating the preference for disk over tape in backup and recovery systems. The SIS model typically reduces the amount of data stored in the subsystem by 60 to 90 percent, reducing the cost of storage and increasing search speed. Many SIS disk subsystems perform de-duplication at the sub-file level, maintaining metadata for both the file and the blocks within the file. This data reduction provides cost-effective storage density, even with traditional RAID disk protection, in a space once dominated by tape systems. SIS disk subsystems are typically connected via TCP/IP, which simplifies connectivity.

SIS disk disadvantages: SIS disk subsystems are special-purpose devices, not well suited for tasks such as file sharing or production application I/O. Most SIS disk sub-systems are TCP/IP-connected devices constrained by the bandwidth limits of that technology. And because the SIS de-duplication process runs on an appliance or on software running on a server, those devices can constrain performance.
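To make the sub-file de-duplication idea concrete, here is a minimal Python sketch (a hypothetical illustration, not any vendor's implementation) that splits a file into fixed-size blocks, fingerprints each block, and stores only the unique blocks while keeping per-file metadata that references them. Real SIS systems often use variable-size chunking and persistent indexes; the principle is the same.

    import hashlib

    BLOCK_SIZE = 4096  # fixed-size chunking for simplicity

    block_store = {}   # fingerprint -> block bytes (one copy per unique block)
    file_index = {}    # file path -> ordered list of block fingerprints

    def ingest(path: str, data: bytes) -> None:
        """De-duplicate a file at the sub-file level on ingest."""
        fingerprints = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            block_store.setdefault(fp, block)   # store unique blocks only
            fingerprints.append(fp)
        file_index[path] = fingerprints         # metadata references the blocks

    def restore(path: str) -> bytes:
        return b"".join(block_store[fp] for fp in file_index[path])

    ingest("/backup/mon/db.dmp", b"A" * 8192 + b"B" * 4096)
    ingest("/backup/tue/db.dmp", b"A" * 8192 + b"C" * 4096)  # shares blocks with Monday
    assert restore("/backup/mon/db.dmp")[:8192] == b"A" * 8192
    print(f"{len(block_store)} unique blocks stored for 6 logical blocks")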

Disk-to-Disk Backups: Why most implementations fail to deliver the promise

Remember the promise: the ability to do more with less. The "less" includes less hardware, less impact, and fewer administrative resources to manage the process, and it is this "less" where disk-to-disk implementations tend to fall short. Usually, customers who deploy disk-to-disk solutions must either deploy more hardware and more administrative resources to manage it, or suffer the performance penalty of sharing a disk sub-system with production applications and letting backup I/O impede production I/O.

Many storage vendors today use a similar disk-array configuration: back-end disks behind back-end controllers, behind a shared cache area, behind front-end controllers that connect to a SAN fabric or point-to-point to the hosts. The cache region services all host requests, regardless of the nature or type of I/O.

It is useful to characterize the various I/O profiles. I/O from an application or host is random-read, random-write, sequential-read, or sequential-write, or a mixture; it can also be large-block, small-block, or variable-block, or all of the above. Databases running OLTP workloads tend toward random read-write, small-block I/O during production but take on a different profile during database loads or dumps. Applications such as Microsoft Exchange contain unstructured data types and can generate random or sequential I/O (depending on file attachment types) with both large- and small-block transfers. Backup and recovery tends to be sequential-write during backup and sequential-read during a restore.

So what do all these I/O types have to do with storage and a common cache? Certain applications benefit significantly from that cache, and others suffer because of it. Cache is ideal for OLTP databases, but it is quickly consumed by large-block sequential-write applications, leaving few cache resources for a database. In disk-to-disk backups, all backup writes pass through the disk sub-system, flooding the shared cache and causing problems for production applications such as OLTP or Exchange, even though they run on completely different hosts. Figure 1 illustrates how a common, shared cache has this impact, and the sketch that follows simulates the effect.

Figure 1: Oracle OLTP, Exchange, file shares, and an Oracle data warehouse all reach the back-end disk through the same storage controllers and a single shared disk cache: a one-size-fits-all approach.
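The cache-pollution effect is easy to reproduce. The following Python sketch (a toy model of a shared controller cache, not any vendor's design) warms a shared LRU cache with an OLTP-style working set, then streams a large sequential backup through the same cache and measures how the OLTP hit rate collapses.

    import random
    from collections import OrderedDict

    class LRUCache:
        """Toy shared read cache with least-recently-used eviction."""
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()
            self.hits = self.misses = 0

        def access(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)
                self.hits += 1
                return True
            self.misses += 1
            self.blocks[block_id] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least-recently-used
            return False

    cache = LRUCache(capacity_blocks=1000)
    oltp_set = [("oltp", i) for i in range(800)]   # working set fits in cache

    for _ in range(10_000):                        # warm up with OLTP traffic
        cache.access(random.choice(oltp_set))
    print(f"OLTP alone: {100 * cache.hits / (cache.hits + cache.misses):.0f}% hits")

    # A sequential backup stream now shares the cache: each backup block is
    # read once and never reused, yet it evicts the OLTP working set.
    oltp_hits = oltp_refs = 0
    for block in range(50_000):
        cache.access(("backup", block))
        if block % 10 == 0:                        # interleaved production I/O
            oltp_hits += cache.access(random.choice(oltp_set))
            oltp_refs += 1
    print(f"OLTP during backup: {100 * oltp_hits / oltp_refs:.0f}% hits")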

To solve this problem, many hardware and software manufacturers recommend deploying another storage platform to isolate backup I/O from production. This approach works, but at significant cost: it breaks the "do more with less" rule and forces customers into the "do more with more" trap!

Pillar Data Systems: A Sensible Approach

Pillar Data Systems delivers the flexibility through choice that others promise but have not achieved. Its innovative approach to Quality of Service (QoS) in the Pillar Axiom Storage system raises the bar in both price/performance and flexibility. The Axiom system uses QoS technology to allocate system resources and handle every data flow according to its business priority. Through a simple management interface, IT managers can provision different classes of service within a single system, aligning storage resources with the business value of each data type and application. As the value of data changes over time, storage managers can reclassify and archive data at a lower QoS value for efficient use of storage resources. Pillar designed the Axiom system from the ground up with QoS in mind: unlike systems that bolt on storage classification as an afterthought, QoS is native to the Pillar Axiom system. The result is a carefully balanced, easily tuned storage system that optimizes the performance of each data class within a single array, without compromises. Storage managers gain the agility to provision any mix of SAN LUNs or NAS file systems to meet specific application requirements. That is why Pillar Axiom is the sensible storage alternative.

True Storage Multi-tenancy

Traditional networked storage lets managers tune disk resources for optimal performance, but these systems are biased toward a single application or application I/O type. Where a traditional array supports multiple applications, managers cannot easily tune discrete segments for individual application requirements; they must make do with an averaged disk setup in which applications contend for resources and no application receives performance priority. Such inflexibility makes both the capital and operational costs of traditional networked storage excessive. This is especially true with backup to disk, where traditional storage arrays and their algorithms try to service production and backup I/O in the same fashion, so backup-to-disk I/O tends to compete with and impact every other type of I/O on the sub-system.

Pillar QoS technology dramatically reduces the cost of storage and enables true multi-tenancy with policy-driven provisioning of disparate data types on a single disk. Disk physics complement the data-priority settings, ensuring performance and preferential handling of high-priority data: the outer segment of the disk, with the fastest linear velocity, accommodates high-throughput data rates, while the inner segment suits archive-level data (Figure 2; a back-of-the-envelope calculation follows). This architecture supports full disk utilization, reducing costs without sacrificing application performance. More importantly, storage resources such as cache and queuing priority can now be segmented to mitigate the impact backup to disk may have on other production applications.

Figure 2: A disk platter segmented by QoS priority, from High at the outer diameter through Medium and Low to Archive at the inner diameter.
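The "disk physics" point follows from constant rotational speed: at a fixed RPM, a track's linear velocity, and with zoned bit recording its sustainable transfer rate, scales roughly with its radius. A back-of-the-envelope Python calculation (illustrative geometry, not measurements of any particular drive):

    import math

    # At constant RPM, linear velocity v = 2 * pi * r * (rpm / 60).
    # With zoned bit recording, bits per track also grow with radius,
    # so sustained transfer rate scales roughly linearly with r.
    rpm = 7200                  # assumed SATA-class drive
    outer_radius_mm = 47.0      # illustrative 3.5-inch platter geometry
    inner_radius_mm = 20.0

    def linear_velocity_mps(radius_mm):
        return 2 * math.pi * (radius_mm / 1000) * (rpm / 60)

    v_outer = linear_velocity_mps(outer_radius_mm)
    v_inner = linear_velocity_mps(inner_radius_mm)
    print(f"outer track: {v_outer:.1f} m/s, inner track: {v_inner:.1f} m/s")
    print(f"outer/inner throughput advantage: ~{v_outer / v_inner:.1f}x")
    # ~2.4x with these numbers: why high-throughput, high-priority data
    # is placed on the outer segments and archive data on the inner ones.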

Advanced Queuing Mechanisms and Thread Allocation

The Axiom system combines queuing and cache tuning to apply and enforce QoS policies for each file system and LUN. Traditional storage systems cannot distinguish incoming data according to business priority, so they process operations in the order received. While serial processing is adequate for storage systems servicing a single application, data, or workload type, virtualized storage pools today service many applications and types. When a storage system can prioritize incoming I/O requests according to their business value and associated QoS classification, it can ensure that business-critical operations are not interrupted by lower-priority applications such as backup and recovery or compliance.

A powerful innovation unique to the Axiom system, the I/O Queuing feature in the Axiom Slammer Storage Controller reviews every incoming request (or thread) and assigns it a priority after checking the QoS value of its associated LUN or file system. Requests are then queued and dispatched in a weighted round-robin fashion, with higher-priority requests receiving a predominance of I/O access over lower-priority requests (see the sketch at the end of this section). Axiom SAN Slammers establish multiple high-performance queues for each QoS priority setting. Axiom NAS Slammers also employ NFS-based thread-priority schemes, assigning and allocating a pool of threads according to the priority level of a given file system. Default queue settings assign 50 percent of total I/O cycles to high priority, 35 percent to medium priority, 10 percent to low priority, and 5 percent to archive-level data. Both SAN and NAS Slammers dynamically allocate CPU resources to serve the actual mix of requests: if high-priority data does not require resources at a given moment, those resources do not stand idle but are used for lower-priority data in queue. This method optimizes resource utilization for the highest overall system performance. In deployments with multiple Slammers, Axiom Brick Storage Enclosures use priority queue management to prevent low-priority I/O from one Slammer from interfering with high-priority I/O from another, avoiding unnecessary disk-head thrashing and the associated performance impact.

Cache Management

When it comes to data, one size does not fit all. Many traditional storage systems require operators to manually tune resources for optimal performance of specific applications, yet traditional SAN or NAS systems can be optimized for only one data profile. This forces storage managers into the inefficient, expensive position of dedicating resources to a single application or making do with non-optimized resources for many applications. QoS policy in the Axiom system automates the physical configuration and tuning of cache resources: each QoS policy dynamically sets volume-specific cache tuning for individual file systems and LUNs. No other storage system of its class offers this granularity of control. The Axiom system learns the specific characteristics of the data it processes and fine-tunes cache operations. In each case, the file-access type and I/O-bias settings of the file system or LUN combine to allocate resources on the fly for maximum CPU and cache utilization, in this manner:

- Sequential-Write: minimal write cache
- Sequential-Read: enables read-ahead algorithms
- Random-Write: maximum write cache minimizes disk writes
- Random-Read: reads small extents into memory
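The following Python sketch models the weighted round-robin dispatch described above, using the default 50/35/10/5 shares. It is a simplified illustration of the scheduling idea, not Pillar's actual Slammer code. Note the work-conserving behavior: an empty higher-priority queue donates its unused slots to lower priorities, mirroring how idle high-priority resources serve lower-priority data in queue.

    from collections import deque

    # Default QoS shares from the text: high 50%, medium 35%, low 10%, archive 5%.
    WEIGHTS = {"high": 10, "medium": 7, "low": 2, "archive": 1}  # 50/35/10/5 ratio
    queues = {level: deque() for level in WEIGHTS}

    def submit(level, request):
        queues[level].append(request)

    def dispatch_round():
        """One weighted round-robin pass: each priority level may dispatch
        up to its weight in requests. Unused slots fall through to the next
        level, so idle high-priority capacity serves lower-priority work."""
        dispatched, spare = [], 0
        for level, weight in WEIGHTS.items():     # high -> archive
            budget = weight + spare
            while budget and queues[level]:
                dispatched.append(queues[level].popleft())
                budget -= 1
            spare = budget                        # donate leftover slots downward
        return dispatched

    for i in range(4):
        submit("high", f"oltp-read-{i}")
    for i in range(30):
        submit("archive", f"backup-write-{i}")
    print(dispatch_round())  # 4 OLTP reads, then 16 backup writes in this pass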

Leveraging QoS

Pillar Axiom QoS technology gives enterprises the ability to allocate storage and configure performance profiles to meet recovery time objectives (RTOs). Consider the following scenarios and the impact Axiom QoS settings can have in a customer environment.

Limited Backup Window: The objective is the least possible impact on production applications during backup. A straightforward approach is to have the backup application write to a LUN or NAS fileshare whose Axiom QoS priority is set to write through cache, preserving cache resources for production I/O requests.

Sufficient Backup Window: When you have a comfortable backup window, you may want to leverage cache resources to improve backup-to-disk performance.

Reducing the Recovery Window: The RTO is the amount of time it takes to recover from data loss. With Axiom QoS technology, you can tune a LUN or filesystem for aggressive read-ahead and stage recovery information in the Axiom cache to enable rapid recovery. With the information in cache, recovery takes place through memory-to-memory transfers, which reduces the RTO.

Flexibility: Regardless of the initial settings for a particular LUN or filesystem, administrators can change QoS priorities to meet an RTO as needed. For instance, setting a LUN or filesystem to write-through cache reduces the impact on production volumes during backup; when a disaster occurs, the administrator can change the QoS level to aggressive read-ahead to facilitate a quicker recovery, then reset the QoS values to their original levels once recovery completes (a hypothetical workflow is sketched at the end of this section). The Axiom also allows QoS changes to be made permanent. The initial configuration of the LUNs or filesystems used for backup may not have been set up correctly, or the environment may change, such as when backup windows shrink; the Axiom allows administrators to change the QoS setting and move that data within the Axiom to the various storage tiers.

Multi-Tiering Backup: When leveraging the flexibility of Axiom QoS technology with multiple tiers of storage resources, administrators can create a backup-to-disk environment tuned to how the business needs to operate. Near-term data can be backed up to a higher tier with different QoS settings; as data ages, it is moved to a lower tier. QoS can provide a customized level of speed and protection for different data sets, depending on their importance. For example, administrators can set up a higher level of performance and redundancy for core R&D code than for PowerPoint files. When backup data is first created, the likelihood that it will be needed for recovery is high; as it ages on backup disk, it no longer makes sense to keep it on a higher tier of storage, and moving older data to a lower tier with the backup application becomes a cost-effective way to optimize resource utilization. The Axiom system can do all of this within a single storage sub-system.
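To make the flexibility scenario concrete, here is a minimal Python sketch of the backup/recovery QoS workflow. The management interface shown (set_qos, the volume name, and all parameters) is entirely hypothetical; it is not the Axiom CLI or API, only an illustration of the sequence an administrator would follow.

    # Hypothetical management wrapper -- not the actual Pillar Axiom API.
    def set_qos(volume, priority, cache_mode, read_ahead):
        print(f"{volume}: priority={priority}, cache={cache_mode}, read-ahead={read_ahead}")

    BACKUP_LUN = "backup_lun_01"  # hypothetical volume name

    # Normal operation: backups write through cache so production keeps the cache.
    set_qos(BACKUP_LUN, priority="low", cache_mode="write-through", read_ahead="off")
    # ... nightly backups run here ...

    # Disaster declared: retune the same LUN for rapid restore (low RTO).
    set_qos(BACKUP_LUN, priority="high", cache_mode="write-back", read_ahead="aggressive")
    # ... restore runs largely from cache via memory-to-memory transfers ...

    # Recovery complete: return to the backup-friendly settings.
    set_qos(BACKUP_LUN, priority="low", cache_mode="write-through", read_ahead="off")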

A Sensible Alternative

Pillar Data Systems was founded to bring to market a completely new storage paradigm that delivers on the promise of doing more with less. The complexity of providing tiered storage for production applications while maintaining solid data-availability best practices has become increasingly difficult for storage administrators to manage. Shrinking backup windows and the need for rapid recovery when disaster strikes lead many administrators to choose disk-based solutions, which used to mean deploying separate storage silos with different management interfaces and cost burdens. The Pillar Axiom Storage system controls costs and delivers storage services to production and availability applications, such as backup and recovery, without competing for storage resources or affecting business-critical applications. Pillar Axiom QoS technology serves the business drivers of production storage and backup/recovery applications alike, allowing the two to coexist peacefully on a single system. Whether organizations deploy standard LUNs, filesystems, or VTL, Axiom QoS technology enables optimal storage for every tier at a competitive price. The Pillar Axiom storage system delivers on the promise of doing more with less.

Figure 3: A single Axiom storage pool on the SAN/network infrastructure serving production applications (email, database, ERP, file shares, web content, data warehouse, imaging) alongside server provisioning, server and desktop/laptop backup, remote-office replication, recovery, compliance/archive servers, and a tape library. Backup data types shown include point-in-time copies, snapshots, a storage pool for remote replication, on-line and near-line backup data, bare-metal restores, shared resource trees, and server provisioning images. QoS placement: High priority occupies the outer 20% of the drives with an I/O queue share of 50% of service; Medium priority 20-60% from the outer diameter with 35%; Low priority 60-80% from the outer diameter with 10%; Archive priority (HSM/compliance data, older backup data, archived backup and compliance data) the inner diameter with 5%.

Another benefit of using Pillar for disk backup is that, unlike systems dedicated exclusively to disk backup, the Pillar Axiom is designed to support disk backup concurrently with other primary and secondary storage applications. As the capacity required for disk backup changes in each environment over time, capacity on the Axiom is never trapped in a single-purpose appliance, because the disk pool is easy to repurpose for other hosted applications. Pillar is changing the way customers deploy multi-tiered storage, easily and cost-effectively. The Axiom enables tier-1 production applications to run alongside lower-tier applications, maximizing the investment in Pillar Axiom storage by delivering higher storage utilization, fewer burdens on administrative personnel through a single management interface, and properly sized application performance for business-critical applications.

Pillar Data Systems takes a sensible, customer-centric approach to networked storage. We started with a simple yet powerful idea: build a successful storage company by creating the value that others had promised but never produced. At Pillar, we're on a mission to deliver the most cost-effective, highly available networked storage solutions on the market. We build reliable, flexible solutions that, for the first time, seamlessly unite SAN with NAS and enable multiple tiers of storage on a single platform. In the end, we created an entirely new class of storage.

www.pillardata.com

© 2006 Pillar Data Systems. All Rights Reserved. Pillar Data Systems, Axiom, and the Pillar logo are all trademarks of Pillar Data Systems. Other company and product names may be trademarks of their respective owners. Specifications are subject to change without notice. TWP-DBB-0506