Technical White Paper

Symantec Backup Exec 10d System Sizing
Best Practices for Optimizing Performance of the Continuous Protection Server
Table of Contents

Executive Summary
System Sizing and Performance Considerations
Calculating Performance
  Measurable Elements
  Data Factors
Test Results
  Initial Image
  Performance Tests
  Scaling Tests
Proper System Sizing: Best Practices
Appendix A: Sizing Process and Formulae
About Diogenes Analytical Laboratories, Inc.
Executive Summary

Continuous data protection (CDP) is an emerging technology that offers faster data restoration and more certain data protection than traditional backup and recovery (B/R) technologies. Traditional backup and recovery using magnetic tape is generally limited to a recovery time of 24 hours: if a failure occurs today, the most recent data available to restore is yesterday's, and any data added or changed since then is lost. This is referred to as a 24-hour recovery point objective (RPO); up to 24 hours of data can be at risk.

Data-replication methodologies have emerged to reduce the 24-hour window of risk associated with tape backup. One such approach, called mirroring, creates a complete replica of the data on a separate disk system. If the primary disk system fails, the secondary system can be mounted for continued processing, and complete data restoration can be accomplished in minutes rather than the hours a tape system would require. But because each mirror image incurs 100% disk space overhead, most organizations will mirror the data no more than every eight hours, yielding an eight-hour RPO.

Data snapshots, another replication technology, are more efficient than data mirrors in that they capture only changed data. The overhead is substantially less, typically 3% to 20% depending on the rate of data change. Most organizations take snapshots every four hours, although some do so hourly, so the RPO can be as little as an hour. However, snapshots have a key limiting factor: the snapshot is stored on the primary disk system. If the primary system fails, the snapshot is lost as well.

Continuous data protection does what its name implies: it captures data changes as they occur. With CDP, RPOs can be reduced to an hour or less. Moreover, CDP data is stored on a disk system separate from the primary disk, so it will not be lost if the primary is lost, as a snapshot would be. And because CDP captures only changed data, it does not incur the 100% per-copy disk space overhead of mirror images.

Because CDP operates fundamentally differently from traditional B/R, mirror, and snapshot technologies, the stress points it places on system architecture are somewhat different. To characterize these points and to quantify best practices, Symantec commissioned Diogenes Analytical Laboratories, Inc., to conduct appropriate lab tests. (Diogenes is an independent lab that does not endorse products; this document should not be construed as an endorsement, nor as a statement that any technology or product is suitable for a particular purpose.) We installed Backup Exec 10d in a variety of configurations and scenarios in order to stress the three main points of a CDP system: system CPU, network, and disk subsystem. We also wanted to find out whether there is a limit to the number of file servers that can be attached to a single CDP server.

This document does not discuss the operation of Backup Exec 10d in detail; it is assumed that the reader is generally familiar with the product and its operation. The principles of sizing a CDP system are similar to those of sizing a B/R system, except that tape drive speed is not a factor. From our testing, the key findings were:
1. A two-way Windows server as the CDP server offers ample capacity for the preponderance of situations. Scalability of CPU resources is unlikely to cause performance problems.
2. Either the network or the disk subsystem is likely to be the limiting factor of performance.
3. Backup Exec 10d was able to back up 33 GB of data spread across 33 file servers in just 35 minutes without loss of functionality, even when 20 of the servers were connected using a 10/100 network.

The third point is important because it measures the efficiency of the application: the test placed just a 20.4% load on the CPU. While it is possible to undersize any component, we believe that following the guidelines in this document will assure satisfactory performance for the preponderance of environments.

System Sizing and Performance Considerations

When an IT organization implements a Backup Exec 10d environment, two configurations are most common. As a best practice, CDP should be implemented as a supplement to B/R; no CDP product is generally considered a B/R replacement. IT organizations will continue to back data up to tape, either for offsite vaulting or for long-term data retention, so a media (backup) server is still necessary. One key question we sought to answer in our lab tests is when to separate the media server from the CDP server.

Figure 1 illustrates a consolidated configuration, in which the media server and the CDP server run on the same machine. For simplicity, we have not shown connections to other systems, such as tape libraries, as they do not factor into our calculations.

Figure 1: Consolidated configuration. Clients 1 through n connect over the LAN to a single machine hosting both the media server and the CDP server with its storage.

The three potential bottlenecks of this configuration are the LAN bandwidth, the media/CDP server processing power, and the CDP storage I/O throughput.
Figure 2 illustrates an extended configuration in which the media server and CDP server are separated.

Figure 2: Extended configuration. Clients 1 through n connect over the LAN to a separate media server and CDP server.

Obviously, the consolidated configuration yields a lower cost of ownership and is therefore preferable, all other things being equal. When do best practices dictate that the two functions be separated? We will answer this in the succeeding sections.

Calculating Performance

Measurable Elements

The elements that must be calculated for performance purposes include:

1. Total data to be protected per hour
2. CPU utilization
3. Network bandwidth
4. Disk I/O throughput

Backup and recovery operations are often very CPU-intensive, consuming 60% to 80% of the CPU during peak times. In most organizations, backup activity is most intense during the evening and night hours. Figure 3 illustrates common CPU utilization over a normal backup cycle using traditional backup methods.

The line in Figure 3 could also represent aggregate resource utilization, including the CPU, network, and disk I/O. When IT organizations assess this total resource line, they should consider that free resources above the utilization line are available for other applications such as CDP.
Clearly, neither the total resources nor any single resource can exceed 100% without causing some jobs to fail; the slowest resource becomes the weak link of the entire system. If the combined system requirements exceed 100% at the peak time, then additional resources are needed. If CPU resources are exceeded, then separating the media server from the CDP server is the next step.

Figure 3: Resource utilization over time in a typical B/R cycle. The chart plots utilization (0% to 100%) across the backup window, from 8:00 pm through 12:00 am to 8:00 am.

When calculating resource availability, storage administrators must also keep restore operations in mind. Sufficient resources should be reserved to accommodate data restores; we recommend that data protection consume no more than 80% of the total available resources to allow for unexpected restore requirements.

The first task in properly sizing a Backup Exec 10d implementation is to quantify the existing resource utilization. Thus far, our discussion has focused on system resources. However, the key factor is the amount of data that must be moved. After the initial image is created on the CDP server, Backup Exec 10d moves only new and changed data from the primary server. The calculation for the total amount of data is simple:

Data change rate (GB) per hour + new data rate (GB) per hour = total data/hr

Readers should note that Backup Exec 10d sends only changed data (blocks), except for small files: if a file is smaller than 1 MB, the entire file is transmitted.

Network metrics are easier to determine. Some sample network throughput values are as follows:

10/100 Ethernet: 1.0 MB/sec to 10 MB/sec per file server
Gigabit Ethernet (GbE): 100 MB/sec per file server
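This sizing arithmetic is simple enough to script. The sketch below (a minimal illustration in Python; the helper names are ours, not part of Backup Exec 10d) combines the change and new-data rates and estimates how long the hourly total would take to cross a given link. As noted next, actual throughput runs well below the theoretical maximum, so the link speed is derated.

```python
# Minimal sizing sketch (illustrative only; names are ours, not the product's).

def total_data_gb_per_hr(change_rate_gb_hr: float, new_data_gb_hr: float) -> float:
    """Data change rate (GB) per hour + new data rate (GB) per hour = total data/hr."""
    return change_rate_gb_hr + new_data_gb_hr

def transfer_minutes(data_gb: float, link_mb_per_sec: float, derate: float = 0.70) -> float:
    """Minutes to move data_gb over a link derated from its theoretical maximum."""
    effective_mb_per_sec = link_mb_per_sec * derate
    return data_gb * 1024 / effective_mb_per_sec / 60

# Example: 8 GB/hr of changed data plus 2 GB/hr of new data over GbE.
hourly = total_data_gb_per_hr(8.0, 2.0)            # 10 GB/hr
print(round(transfer_minutes(hourly, 100.0), 1))   # ~2.4 minutes per hourly batch
```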
These rates are theoretical maximums, and actual throughput can be substantially less. Dedicated subnets will yield throughput closer to the maximum than networks that mix user data and backup data. A safe estimate is 70% of the theoretical maximum.

As we noted earlier, B/R can consume a significant portion of the media server CPU. The actual amount will vary as a result of a number of factors, but those factors are not really material; it is better for an IT organization to measure actual CPU utilization over a period of time to determine peak-period usage. Remember, it is not the average that is important but the peak value. CPU utilization is easily captured using Windows Performance Monitor (Perfmon), which will capture and graph CPU and memory utilization to help determine average and peak usage.

Disk system throughput must likewise be determined through actual monitoring and testing; with the wide variety of disk subsystems on the market, a theoretical calculation is not beneficial. Fortunately, open-source tools are available to perform this task. We recommend using IOmeter.

Data Factors

Another factor that impacts system sizing and performance is the nature of the data. Currently, the Backup Exec 10d Continuous Protection Server supports file system data, so it was not necessary to consider Exchange or SQL Server data for these tests. Within file systems, data can be characterized as predominantly small file (less than 1 MB), large file (greater than 1 MB), or mixed. From a CDP perspective, Backup Exec 10d indexes files as they are transferred to the CDP server to facilitate faster restore. This indexing requires both processing cycles and disk space.

Given the measurable factors and the data factors, our tests set out to establish guidelines that IT organizations can use in sizing their Backup Exec 10d environments.

Test Results

Initial Image

After installing the Backup Exec 10d software, our first task was to create an initial image of the primary data on the CDP server. Our first test, then, was to determine how much disk space should be allocated for this initial image. Because of the indexing, we did not expect a 1:1 ratio but needed to determine what the guidelines are. For our tests, we used three scenarios:

Scenario A: Many small files, deep tree structure
Scenario B: Mix of large and small files, moderate tree structure
Scenario C: Large files, flat tree structure

Figure 4 shows the results of our tests of these scenarios, and Table 1 describes how the different scenarios were created.
Figure 4: Disk space overhead resulting from indexing. The chart plots index file overhead (percent) for Scenarios A, B, and C across runs 1 through 4.

Table 1: How the test scenarios were created

Scenario A (small files, deep tree): 9 files per directory, each 2 KB; 10 directories per level.
  Run 1: 3 levels deep. Run 2: 4 levels deep. Run 3: 5 levels deep. Run 4: 6 levels deep.
Scenario B (mixed files, moderate tree): 4,095 files per directory, ranging from 2 KB to 4 MB.
  Run 1: 3 directories per level, 2 levels deep. Run 2: 4 per level, 3 deep. Run 3: 5 per level, 4 deep. Run 4: 6 per level, 5 deep.
Scenario C (large files, flat tree): files of 4 MB each in a single flat directory (no subdirectories).
  Run 1: 30 files. Run 2: 65 files. Run 3: 200 files. Run 4: 1,000 files.

As the graph indicates, the additional disk space required for a small-file system in a very deep tree structure can be as much as 42%, whereas 15% is sufficient for mixed-file environments and just 5% for large-file environments.

Performance Tests

To provide guidance to Backup Exec 10d users, we created five different scenarios covering a variety of CPU, network, and client configurations. Each scenario and its results are described below. Readers should note that neither the total amount of data in the file system nor the percentage of data changed is important; only the total amount of new and changed data impacts a CDP system. In the first two scenarios, we wanted to determine the limiting factor of the environment; thus, our variable factors were the elements of the infrastructure.
Scenario 1:
Server: Dual-processor 3.06 GHz Xeon, 2 GB memory
Clients: 3 file servers with 10 GB of changed data each (30 GB total)
Network: 10/100 Ethernet; theoretical 10 MB/sec throughput (30 GB in 50 mins.)
Disk subsystem: 1 Gb/sec Fibre Channel, RAID 0, 500 GB; theoretical 100 MB/sec throughput (30 GB in 5 mins.)

Results:
CPU utilization: 13%
Memory utilization: 17%
Total elapsed time: 2 hrs 34 mins.
Network throughput: 3.8 MB/sec

From these results, we can see that the network throughput was substantially lower than the theoretical throughput. To determine the cause, we re-ran the test with just a single client and achieved 10 MB/sec, the maximum. So we ran the test a third time with two clients; in this case, network throughput was just 6.4 MB/sec. Thus, it became clear that network performance deteriorated significantly with additional clients, probably because of network contention. In subsequent tests, we were able to connect 20 servers using a 10/100 network (see Scenario 5 below). If IT organizations experience performance degradation on a 10/100 network, it may be advisable to upgrade to newer networking infrastructure such as GbE. We did not experience any network degradation problems in later tests using GbE.

The low CPU and memory utilization indicate that this configuration could be quite adequately served by a single-processor system.

Scenario 2:
Server: Quad-processor 1.5 GHz Xeon, 3.5 GB memory
Clients: 6 file servers with 15 GB of changed data each (90 GB total)
Network: Gigabit Ethernet; theoretical 100 MB/sec throughput
Disk subsystem: SCSI Ultra160, 700 GB, RAID 1; theoretical 160 MB/sec throughput

Results:
CPU utilization: 13.3%
Memory utilization: 8%
Disk throughput: 22.8 MB/sec
Network throughput: 24 MB/sec
Total elapsed time: 1 hr 9 mins.

In this scenario, the disk subsystem was our limiting factor. Although Ultra160 SCSI supports a 160 MB/sec channel speed, the combination of RAID, disk allocation, and controller speeds results in substantially lower actual throughput. Running at 22.8 MB/sec, the 90 GB should have taken 1 hr 6 mins., while the actual time was 1 hr 9 mins. To improve performance, we would have needed a faster disk system. It is also worth noting that CPU utilization was only a small fraction of capacity: a dual-processor system, and perhaps even a single-processor system, would deliver similar results at a fraction of the cost of a quad processor.

Scaling Tests

After completing our performance tests, it was clear that we could drive Backup Exec 10d to the maximum capacity of the disk or the network. So we next wanted to know to what extent adding more file servers to the environment would impact total performance. Adverse impact would manifest as either degraded disk performance as a result of disk contention (thrashing) or increased CPU/memory utilization.

For these tests, we applied a mix of file types and sizes. The total amount of new and changed data ranged from 1 GB per file server to 24 GB per server. In most production situations, new and changed data arrive gradually during the course of business; applying a large amount of data (e.g., 24 GB) in a single event is referred to as a "data bomb." Data bombs are a worst-case scenario for CDP systems, and that is exactly what we did to Backup Exec 10d.

As a result of the data bomb, Backup Exec 10d's journaling operation temporarily suspended because of journal system overflow. It is important to note that the jobs did not fail and that normal replication continued. However, rather than reduce functionality, we chose to run the product at full capability and throttled the disk throughput to a maximum of 11.4 MB/sec. We should also note that we believe the data volumes we applied in the test exceed those that will be typical in a small to medium enterprise environment. IT organizations should evaluate their data volumes, as noted in our best-practice recommendations at the end of this document, and determine whether to throttle disk performance or allow journaling to be temporarily suspended during peak periods.

For the scaling tests, our infrastructure became the constant and the number of file servers became the variable. Our infrastructure included:

Server: Quad-processor 1.5 GHz Xeon, 3.5 GB memory
Network: Gigabit Ethernet; theoretical 100 MB/sec throughput
Disk subsystem: Ultra160 SCSI, 700 GB, RAID 1; throttled to 11.4 MB/sec throughput
Scenario 3:
8 file servers connected via GbE
22 GB of new/changed data per server
Theoretical best time: 4 hrs 8 mins.

Results:
CPU utilization: 15.9%
Memory utilization: 8.4%
Total elapsed time: 4 hrs 10 mins.

The results do not indicate any adverse effects from eight file servers.

Scenario 4:
13 file servers connected via GbE
24 GB of new/changed data per server
Theoretical best time: 7 hrs 57 mins.

Results:
CPU utilization: 16.0%
Memory utilization: 8.3%
Total elapsed time: 7 hrs 59 mins.

The results do not indicate any adverse effects from 13 servers.

Scenario 5:
13 file servers connected via GbE
20 file servers connected via 10/100 Ethernet
1 GB of new/changed data per server
Theoretical best time: 32 mins.

Results:
CPU utilization: 20.4%
Memory utilization: 8.5%
Total elapsed time: 35 mins.

Again, the results do not indicate any adverse effects from 33 servers. The key issue in this scenario was possible processing overhead from simply connecting to and managing a larger number of file servers. Clearly, Backup Exec 10d easily scaled to handle 33 file servers.
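The "theoretical best time" in these scenarios is first-order arithmetic: total new/changed data divided by the throughput of the slowest component, here the disk subsystem throttled to 11.4 MB/sec. The sketch below reproduces that arithmetic as a hypothetical illustration (the function name is ours); its output lands close to, though not exactly on, the published best times, which presumably also reflect per-server and protocol overheads.

```python
# First-order estimate of a scaling scenario's best time: total data divided
# by the throughput of the slowest component (disk, throttled to 11.4 MB/sec).
# Treat the result as an approximation of the paper's published figures.
def best_time_minutes(servers: int, gb_per_server: float,
                      bottleneck_mb_per_sec: float = 11.4) -> float:
    total_mb = servers * gb_per_server * 1000   # assuming 1 GB = 1000 MB here
    return total_mb / bottleneck_mb_per_sec / 60

print(round(best_time_minutes(8, 22)))    # ~257 min; the paper states 4 hrs 8 mins
print(round(best_time_minutes(13, 24)))   # ~456 min; the paper states 7 hrs 57 mins
```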
Proper System Sizing: Best Practices

From the tests that we conducted, we recommend the following steps to properly size a Backup Exec 10d environment (a small scripted version of the checks in steps 3 through 5 follows the list):

1. Calculate (or estimate if necessary) the combined peak data growth and change rate (referred to as total peak data).
   a. If the total amount of data exceeds 40 GB per hour, consider adding an additional CDP server. (Based on our tests, 33 GB were backed up in 35 minutes. However, performance was not linear, so this data rate cannot be extrapolated with absolute certainty.)
2. Measure existing infrastructure elements to determine the following metrics:
   a. Server CPU and memory during B/R operations (if a consolidated environment is anticipated)
   b. Network bandwidth (GB/hr)
   c. Disk I/O throughput
3. Divide the total peak data by the network bandwidth.
   a. If the total peak data cannot be moved across the network within 80% of the recovery point objective (e.g., if the RPO is 1 hr, the data should be transferable within 48 minutes, in order to allow time for restores, resource contention, etc.), consider upgrading network resources.
   b. If your CDP data is operating on a 10/100 network and performance is insufficient, consider upgrading to GbE.
4. Divide the total peak data by the disk I/O throughput.
   a. If the total peak data cannot be moved through the disk subsystem within 80% of the RPO, consider upgrading the disk subsystem (e.g., a faster disk array, or spreading the data across multiple arrays).
5. Separate the media server from the CDP server if:
   a. Peak CPU utilization during backup operations exceeds 60%
   b. Average CPU utilization during backup operations exceeds 50%
   c. Restore requirements are highly variable or unknown and media server availability must be assured
6. If any infrastructure element is insufficient, it must be improved to the proper level.
7. We did not reach an upper limit on the number of file servers that could be attached to a Backup Exec CDP server based on CPU overhead. The limiting factors are more likely to be data volume and its impact on network throughput and disk I/O throughput.
8. A dual-processor CDP server should be sufficient for all but the largest configurations.
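The sketch below turns steps 3 through 5 into executable checks. It is an illustration only, assuming the thresholds stated above (80% of the RPO window, 60% peak and 50% average CPU); the function and parameter names are ours, and the measured rates would come from tools such as IOmeter and Perfmon.

```python
# Sizing checks from the best-practices list (illustrative, not product code).
def sizing_report(peak_data_gb_hr: float, net_gb_hr: float, disk_gb_hr: float,
                  rpo_hr: float, peak_cpu_pct: float, avg_cpu_pct: float) -> list:
    findings = []
    window_hr = 0.80 * rpo_hr              # reserve 20% for restores, contention
    data_gb = peak_data_gb_hr * rpo_hr     # peak data accumulated per RPO window
    if data_gb / net_gb_hr > window_hr:    # step 3: network check
        findings.append("Upgrade network resources (e.g., 10/100 to GbE).")
    if data_gb / disk_gb_hr > window_hr:   # step 4: disk check
        findings.append("Upgrade the disk subsystem or spread data across arrays.")
    if peak_cpu_pct > 60 or avg_cpu_pct > 50:   # step 5: CPU thresholds
        findings.append("Separate the media server from the CDP server.")
    return findings or ["Current infrastructure appears adequate."]

# Example: 30 GB/hr total peak data, 1-hour RPO, measured rates and CPU load.
for finding in sizing_report(30, net_gb_hr=120, disk_gb_hr=40, rpo_hr=1,
                             peak_cpu_pct=65, avg_cpu_pct=35):
    print(finding)
```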
Appendix A: Sizing Process and Formulae

As with sizing a backup environment, properly sizing a CDP system starts with determining the slowest-performing component. With backup, the slowest component is typically either the tape drive or the network; CDP is similar, except that the slowest component is most likely to be the network or the disk drive. The total amount of data that can be protected is limited by the throughput of the slowest component. The formula for calculating how much data can be backed up in a day is:

Amount of Data (GB) = 24 hrs * X (GB/hr)

where X is the transfer rate of either the disk drive or the network, whichever is slower. However, using 24 hours is not realistic, as no time is allocated for restores; the advisable 80% should be used instead, or about 19 hours. Determining the speeds is simple: IOmeter can be used to measure the transfer rate of both disk drives and the network.

If the CDP server is also functioning as the tape backup (media) server, then other considerations must be factored in. For this calculation, we assume that CDP will not be running at the same time as the backups. Readers should note that running both operations in parallel could impact performance significantly, as the CDP server would be trying to write data to disk while the backup software is trying to read it, resulting in disk I/O contention.

Starting with:

Tc + Tb = 19 hrs (remember the 80% rule above)

where Tc is the time spent doing CDP and Tb is the time spent doing backups:

Tc = Amount of Data (GB) / X (GB/hr)

where X, as above, is the slower of the two transfer rates for disk or network, and:

Tb = Amount of Data (GB) / Y (GB/hr)

where Y is the slower of the two transfer rates for tape or network. Plugging these into our original equation, we get:
Amount of Data (GB) / X (GB/hr) + Amount of Data (GB) / Y (GB/hr) = 19 hrs

Solving this equation for the amount of data, we get:

Amount of Data (GB) = 19 hrs / (1 / X (GB/hr) + 1 / Y (GB/hr))

Again, IOmeter can be used to determine the network and disk transfer rates. For tape, 80% of the theoretical maximum is often a good rule of thumb. It is important to note that the amount of data here is not the amount of storage the server has, but rather the amount of data that is either new or changed.
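As a worked illustration of the appendix formula, the snippet below computes the daily protectable data for a hypothetical combined CDP/media server; the rates are made-up example numbers, not measurements from the tests.

```python
# Appendix A formula: with Tc + Tb = 19 hrs (the 80% rule), the daily amount
# of new/changed data a combined CDP + media server can handle is
#   D = 19 / (1/X + 1/Y)
# where X is the slower of disk/network (CDP) and Y the slower of tape/network.
def daily_data_gb(x_gb_hr: float, y_gb_hr: float, window_hr: float = 19.0) -> float:
    return window_hr / (1.0 / x_gb_hr + 1.0 / y_gb_hr)

# Hypothetical example: disk-limited CDP at 80 GB/hr, tape-limited backup at
# 120 GB/hr (already derated to roughly 80% of the theoretical maximum).
print(round(daily_data_gb(80, 120)))   # ~912 GB of new/changed data per day
```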
About Diogenes Analytical Laboratories, Inc.

Diogenes Analytical Laboratories, Inc. is an independent organization dedicated to helping information technology buyers reduce the inherent risk and uncertainty associated with technology purchases. Our goal is to create an informed IT consumer and to provide the complete information needed to make smart purchase decisions. This report is based on Diogenes Analytical Laboratories' actual lab testing experiences. The opinions expressed in this report are those of Diogenes Analytical Laboratories, Inc.

Diogenes Analytical Laboratories, Inc. All rights reserved. Neither this report nor any portion of it may be reproduced in any form without the express written consent of Diogenes Analytical Laboratories, Inc. The use of this report is governed by a separate Product and Services Use Agreement.

Trademark notices: The Diogenes logo and flame are trademarks of Diogenes Analytical Laboratories, Inc. Backup Exec 10d is a trademark of Symantec Corp.