IBM PowerHA SystemMirror for i
Performance Information


Version: 1.0
Last Updated: April 9, 2012

Table of Contents

1 Introduction
2 Geographic Mirroring
2.1 General Performance Recommendations
2.1.1 Source and Target Comparison
2.1.2 CPU considerations
2.1.3 Memory considerations
2.1.4 Disk Subsystem
2.1.5 System disk pool considerations
2.1.6 Communications Lines
2.1.7 Communication Transport Speeds
2.2 Run-time Environment
2.2.1 Delivery and Mode
2.2.2 Sizing for optimum performance
2.2.3 Monitoring the run-time environment
2.3 Synchronization
2.3.1 Partial and Full synchronizations
2.3.2 Tracking Space
2.3.3 Monitoring synchronization
2.3.4 Calculating Full Synchronization Time
2.3.5 Synchronization Priority
2.3.6 Managing Contention between run-time and synchronization
3 Metro Mirror and Global Mirror
4 FlashCopy
4.1 DS8000 FlashCopy SE
4.2 SVC/V7000 Thin Provisioning

1 Introduction

The primary focus of this document is to give recommendations for achieving the best performance possible with the various PowerHA SystemMirror technologies. The technologies covered are geographic mirroring, Metro Mirror and Global Mirror, and FlashCopy.

2 Geographic Mirroring

With geographic mirroring, IBM i does the replication, so it is very important to consider performance when planning a geographic mirroring solution. While asynchronous geographic mirroring does allow more flexibility regarding the distance between systems, there are still implications to undersizing the source, the target, or the communications line between the two. There are two separate aspects to consider when sizing a geographic mirroring environment. During normal run-time of the production environment, geographic mirroring adds some overhead as the IBM i operating system sends disk writes to the target system. The second aspect is the overhead and time required for synchronization, when the target IASP is reconnected to the source IASP and changes are pushed from the source to the target to make the two equivalent again.

2.1 General Performance Recommendations

2.1.1 Source and Target Comparison

Geographic mirroring consumes resources on both the source and the target system. Especially for synchronous geographic mirroring, the best performance will be seen when the source and target systems are fairly equivalent in CPU, memory, and disk subsystem.

2.1.2 CPU considerations

There is extra CPU and memory overhead when running geographic mirroring, on both the source and target systems. There must be sufficient excess CPU capacity to handle this overhead, but there is no formula to calculate it exactly, as it depends on many factors in the environment and the configuration. As a general rule, the source and target partitions used to run geographic mirroring need more than a partial processor. In a minimal CPU configuration, you can potentially see 5-20% CPU overhead while running geographic mirroring. The processor on the target system should be roughly equivalent to the processor on the source system. Undersizing the target system can affect run-time performance, and it also may not be acceptable in the event of a switchover or failover, where production is then running on the target system.

2.1.3 Memory considerations

Geographic mirroring also requires extra memory in the machine pool. For optimal performance of geographic mirroring, particularly during synchronization, increase the machine pool size by at least the amount given by the following formula, and then use WRKSHRPOOL to set the machine pool size:

Extra machine pool size = 300 MB + (0.3 MB * number of disk arms in the IASP)

This extra machine pool storage is required on all nodes in the cluster resource group (CRG). It matters most during the synchronization process on the target node, as well as when a switchover or failover occurs.

NOTE: The machine pool must be made large enough before starting a resynchronization. Memory added after the synchronization has started will not be used, and the synchronization could take longer.

If the system value QPFRADJ is equal to 2 or 3, the system might change the storage pools automatically as needed. To prevent the performance adjuster function from reducing the machine pool size, set the machine pool minimum size (MINPCT parameter) to the calculated amount (the current size plus the extra size for geographic mirroring from the formula) by using the Work with Shared Storage Pools (WRKSHRPOOL) command or the Change Shared Storage Pool (CHGSHRPOOL) command.
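As a quick check on this formula, the following is a minimal sketch of the calculation; the 120-arm IASP in the example is a hypothetical value used only for illustration:

```python
def extra_machine_pool_mb(iasp_disk_arms: int) -> float:
    """Extra machine pool storage (MB) recommended for geographic
    mirroring, per the formula above: 300 MB plus 0.3 MB per disk
    arm in the IASP."""
    return 300 + 0.3 * iasp_disk_arms

# Hypothetical example: an IASP with 120 disk arms needs roughly
# 300 + 36 = 336 MB of extra machine pool storage on every node
# in the cluster resource group.
print(extra_machine_pool_mb(120))  # 336.0
```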

2.1.4 Disk Subsystem

Disk unit and IOA performance can affect overall geographic mirroring performance. The disk subsystem on the target side should be equivalent to that on the source side. It does not need to be identical, but it should have around the same number of arms with the same performance characteristics, as well as equivalent IOA performance on both sides. IOA cache has been found to affect geographic mirroring performance: performance will be best with a large amount of IOA cache available on both the source and target systems. When possible, the disk assigned to the IASP should be placed on a separate IO adapter from the SYSBAS disk to reduce any possible contention.

2.1.5 System disk pool considerations

As with any system disk configuration, the number of disk units available to the application can have a significant effect on its performance. Putting additional workload on a limited number of disk units might result in longer disk waits and ultimately longer response times for the application. This is particularly important when it comes to temporary storage in a system configured with independent disk pools, because all temporary storage is written to the SYSBAS disk pool. Remember also that the operating system and basic functions run in the SYSBAS disk pool. As a starting point, use the guidelines shown in the following table.

Disk arms in IASP | Arms for SYSBAS: divide IASP arms by
Less than 50      | 3
50 or more        | 5

For example, if the IASP contains 10 drives, then SYSBAS should have at least 3. If the IASP contains 50 drives, then SYSBAS should have at least 10. Monitor the percent busy of the SYSBAS disk arms in your environment to ensure that you have the appropriate number of arms. If utilization grows beyond 40%, more arms should be added.
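Here is a minimal sketch of that guideline. The divisors are partly reconstructed from the worked examples above (10 arms -> at least 3, 50 arms -> at least 10), so treat the exact break point as approximate:

```python
import math

def min_sysbas_arms(iasp_arms: int) -> int:
    """Starting-point guideline for the number of SYSBAS disk arms,
    using the divisors from the table above."""
    divisor = 3 if iasp_arms < 50 else 5
    # The document's example rounds 10 / 3 down to "at least 3".
    return max(1, math.floor(iasp_arms / divisor))

print(min_sysbas_arms(10))  # 3, matching the example above
print(min_sysbas_arms(50))  # 10, matching the example above
```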

2.1.6 Communications Lines

When you are implementing a PowerHA solution using geographic mirroring, plan for adequate communication bandwidth so that it does not become a performance bottleneck. Geographic mirroring can be used over virtually any distance; however, only you can determine the latency that is acceptable for your application. The type of networking equipment, the quality of service, the distance between nodes, and the number and characteristics of the data ports used can all affect communications latency. As a result, these become additional factors that can impact geographic mirroring performance. To ensure better performance and availability, the following is recommended:

To provide consistent response time, geographic mirroring should have its own redundant communication lines. Without dedicated communication lines, there might be contention with other services or applications that use the same line.

Geographic mirroring supports up to four communication lines (data port lines), and a cluster heartbeat can be configured for up to two lines. A round-robin approach is used to send the data across the lines. This implies that, for best performance, when multiple data port lines are configured they should have close to equivalent performance characteristics. If one slow line is added, it will gate the sending of data to that line's speed.

Geographic mirroring replication should also run on a separate line from the cluster heartbeating line (the line associated with each node in the cluster). If the same line is used, heartbeating could fail during periods of heavy geographic mirroring traffic, causing a false partition.

From a high availability point of view, it is recommended to use different interfaces and routers connected to different network subnets for the four data ports that can be defined for geographic mirroring. It is better to install the Ethernet adapters in different expansion towers, using different hardware busses. Also, if you use multiport IO adapters, use different ports to connect the routers.

If your configuration is such that multiple applications or services require the use of the same communication line, some of these problems can be alleviated by implementing quality of service (QoS) through the TCP/IP functions of IBM i. The IBM i QoS solution enables policies to request network priority and bandwidth for TCP/IP applications throughout the network.

Ensure that throughput for each connection matches. The speed and connection type should be the same for all connections between system pairs; if throughput differs, performance will be gated by the slowest connection. For example, a customer can have 1 Gb Ethernet from their servers to the attached switches, but if the site-to-site connection is a DS-3, then between sites they are utilizing roughly a 45 Mbps connection.

Physical capacity is not throughput capacity. Older 10M and 100M Ethernet connections use earlier implementations of Carrier-Sense Media Access/Collision Detection (CSMA/CD); plan on no more than 30-35% throughput. As the network becomes more saturated, there are more collisions, causing more retransmissions. This becomes the limit on data throughput, as opposed to the speed of the actual line. With newer implementations of 10M and 100M Ethernet, the data throughput can vary from 20% to 70%, again depending on network saturation.

Ensure that your connections take an appropriate route. You want to understand whether it is a circuit-switched service (like a T-1), and whether the connection goes directly from point A to point B or is routed through other switching offices.

Size the communication bandwidth for both resynchronization and normal production in parallel. In a disaster situation, you may have switched over to your target system and be running production on it when the original source system comes back online and must be resynchronized. The full synchronization will then take place in conjunction with normal run-time changes, and the resynchronization could degrade application performance if the communications pipe is saturated.

2.1.7 Communication Transport Speeds

Just how fast is a T1 line? A data T1 transfers information at about 1.544 megabits per second, which translates to 0.193 MBps theoretical throughput. The absolute best that you can hope to get out of a T1 line is 70% effective throughput, and most network specialists say to plan on 30%. Therefore, the best that a T1 line can transfer is 0.135 MBps. If you have a 2 gigabyte file to initially synch up, that synch would take over 80 days. As you can see, most systems need more than a T1 line to achieve effective geographic mirroring throughput. T3 lines are a common aggregation of 28 T1 circuits, yielding 44.736 Mbps total network bandwidth, or about 5.5 MBps, with a best effective throughput of 70%, which equals 3.9 MBps, and a planning number of 2 MBps. The OC speeds (the optical carrier, fiber-optic-based broadband network) provide more bandwidth to achieve higher throughput rates.

The following table provides other communication line speeds.

Type                | Raw speed (Mbps) | Raw speed (MBps) | 30% planning (MBps) | GB/hour during synch
T1                  | 1.544            | 0.193            | 0.058               | 0.2
DS3/T3              | 44.736           | 5.6              | 2                   | 7.2
OC-1                | 51.84            | 6.5              | 1.9                 | 7.0
OC-3                | 155.52           | 19.4             | 6                   | 21.6
OC-9                | 466.56           | 58.3             | 17.5                | 63
OC-12               | 622.08           | 77.8             | 23.3                | 84
OC-18               | 933.12           | 116.6            | 35.0                | 126
OC-24               | 1244.16          | 155.5            | 46.7                | 168
OC-36               | 1866.24          | 233.3            | 70.0                | 252
OC-48               | 2488.32          | 311.0            | 93.3                | 336
OC-192              | 9953.28          | 1244.2           | 373.2               | 1344
1 Gb Ethernet local | 1000             | 125              | 37.5                | 135
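The planning columns in this table follow directly from each raw line rate. The following is a minimal sketch of that arithmetic, assuming the 30% planning rule described above (note that the document rounds some values, such as the DS3/T3 planning numbers, up):

```python
def line_planning_numbers(raw_mbps: float, planning_pct: float = 30.0):
    """Derive the planning columns used in the table above from a
    raw line rate in megabits per second."""
    raw_mBps = raw_mbps / 8                      # megabytes per second
    plan_mBps = raw_mBps * planning_pct / 100    # 30% planning number
    gb_per_hour = plan_mBps * 3600 / 1000        # GB moved per hour of synch
    return raw_mBps, plan_mBps, gb_per_hour

# T1 (1.544 Mbps): ~0.193 MBps raw, ~0.058 MBps planning
print(line_planning_numbers(1.544))
# DS3/T3 (44.736 Mbps): ~5.6 MBps raw, ~1.7 MBps planning (~6 GB/hour);
# the document rounds these up to 2 MBps and 7.2 GB/hour.
print(line_planning_numbers(44.736))
```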

2.2 Run-time Environment

2.2.1 Delivery and Mode

When configuring geographic mirroring, there are two main parameters that affect geographic mirroring run-time performance. The DELIVERY parameter affects the performance of disk writes to the IASP. With synchronous delivery, a disk write does not complete until the affected page of storage has also been received on the target system. Asynchronous delivery allows the disk write on the source to complete once the write has been cached; the actual sending of the disk write to the target system happens outside the scope of the write on the source.

For synchronous delivery, there is also a synchronous or asynchronous MODE. Synchronous mode ensures that the write has arrived in the disk cache on the target (essentially on disk at that point) before returning. Asynchronous mode only ensures that the write is in memory on the target.

Synchronous delivery with synchronous mode guarantees equivalent copies of the IASP on source and target while geographic mirroring is active. It also provides the added protection of a crash-consistent copy of the data in case of a target system failure, since all writes will have been received into the disk subsystem. Synchronous delivery with asynchronous mode may be beneficial for customers running with a significantly slower disk subsystem on their target system: it allows the disk write on the source to complete without waiting for completion on the target. This delivery and mode combination still guarantees equivalent data on the source and target IASPs in the case of a failure of the source system.

With synchronous delivery, it is very important to have the communications bandwidth available to support the number of disk writes at all peak periods throughout the day or night. The overhead of sending the data to the target is added to the time for each disk write to complete, which could significantly affect production performance. Even with a very fast line, if the distance between the source and target is too great, production performance will suffer. For this reason, asynchronous delivery for geographic mirroring was introduced in release 7.1. Asynchronous delivery is best for environments where the source and target are separated by too great a distance for acceptable synchronous response times, or for scenarios where the bandwidth cannot support the peak write rate.

2.2.2 Sizing for optimum performance

For best run-time performance, it is important to know the write volume within the IASP. We consider only writes because those are the only IO transferred to the target system. If the IASP has not yet been defined, the write volume in SYSBAS can be used as a rough estimate, understanding that this may result in excess communications capacity. Both the peak and average megabytes per second written should be collected, preferably over short intervals, such as 5 minutes.

For synchronous delivery, the bandwidth of the communications line(s) must be able to keep up with the peak write volume. If it cannot keep up, the writes will begin to stack up and production performance will suffer. For asynchronous delivery, the bandwidth of the lines must still keep up with at least the average write volume. Since writes on the source are not waiting, some queuing is acceptable, but if the line cannot handle the average write volume, geographic mirroring will get further and further behind. It is also important to examine the variance of the write rate over time. If there is a large variance between peak and average, it may be advisable to size more for the peak. Undersizing in this case would affect the recovery point objective in the case of a source system failure during the peak write rate.

To determine the megabytes of writes per second for each interval, run the performance tools during representative and peak periods. From the resulting QAPMDISK file, use these fields:

DSBLKW (number of blocks written): a block is one sector on the disk unit. PD (11,0).
INTSEC (elapsed interval seconds): the number of seconds since the last sample interval. PD (7,0).

Then take the following steps:

1. Calculate the disk blocks written per second: disk blocks written per interval divided by the number of seconds in the interval (QAPMDISK.QAPMDISK.DSBLKW / QAPMDISK.QAPMDISK.INTSEC).
2. Convert disk blocks to bytes: multiply by 520 to get the number of bytes.
3. Divide by a million to get megabytes per second.
4. If using mirrored disks, divide by 2 to get geographic mirroring traffic.

The formula to calculate the amount of traffic, expressed as megabytes written per second, is as follows:

((QAPMDISK.QAPMDISK.DSBLKW / QAPMDISK.QAPMDISK.INTSEC) * 520) / 1,000,000 / 2

For example, if you determine that the amount of traffic is 5 MBps and you want to use geographic mirroring, then you need a pipe that can accommodate 5 MBps of data being transferred. If you are configuring two lines as data ports, then you need 2.5 MBps per line. From the table earlier, a DS3/T3 allows 5.6 MBps theoretical throughput, with 2 MBps as the best-practice planning number at 30% utilization. An OC-3 line allows 19.4 MBps theoretical throughput, with 6 MBps as the best-practice planning number at 30% utilization. You can initially start with two DS3 lines, but you may need to upgrade to two OC-3 lines to plan for growth.
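The following is a minimal sketch of this calculation; the QAPMDISK interval totals in the example are hypothetical values chosen to produce the 5 MBps case discussed above:

```python
def geomirror_write_mBps(dsblkw: int, intsec: int, mirrored: bool = True) -> float:
    """Megabytes per second of write traffic that geographic mirroring
    must carry, per the formula above: blocks per second times 520
    bytes per block, divided by 1,000,000, and halved if disk
    mirroring doubles the recorded write count."""
    mBps = (dsblkw / intsec) * 520 / 1_000_000
    return mBps / 2 if mirrored else mBps

# Hypothetical QAPMDISK interval: 5,769,230 blocks written in 300 seconds
traffic = geomirror_write_mBps(5_769_230, 300)
print(f"{traffic:.1f} MBps")  # ~5.0 MBps
# At 5 MBps over two data port lines, each line carries ~2.5 MBps; as
# noted above, you can start with two DS3 lines (2 MBps planning each)
# and upgrade to two OC-3 lines as the write volume grows.
```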

2.2.3 Monitoring the run-time environment

When using asynchronous delivery, it may be useful to determine whether geographic mirroring is keeping up with disk writes. On the DSPASPSSN command on the source system, the Total data in transit field gives the amount of data, in megabytes, that has been sent to the target system but not yet acknowledged as received. This field is shown only when the transmission delivery is *ASYNCH and the state is ACTIVE.

2.3 Synchronization

2.3.1 Partial and Full synchronizations

When you suspend mirroring for any planned activities or maintenance, changes made on the production copy of the independent disk pool are not transmitted to the mirror copy. So, when geographic mirroring is resumed, synchronization is required between the production and mirror copies. If geographic mirroring is suspended without tracking, a full synchronization occurs, which can be a lengthy process. If geographic mirroring is suspended with the tracking option, PowerHA tracks changes up to the tracking space limit specified on the ASP session. When mirroring is resumed, the production and mirror copies are synchronized concurrently while geographic mirroring is running.

Tracking is available on both the source side and the target side. Target side tracking greatly reduces the need for a full synchronization. Usually a full synchronization is only required when either the source or target IASP does not vary off normally, such as after a crash or an abnormal vary-off. While a synchronization is taking place, the environment is not highly available. This makes it essential to calculate the time required for a full synchronization, to understand whether the business can tolerate being exposed to an outage for that long.

2.3.2 Tracking Space

Tracking space is a reserved area within the IASP where the system tracks pages changed while geographic mirroring is not active, which need to be synchronized when mirroring resumes. Tracking space is needed only when the target copy of the IASP is suspended, detached, or resuming. The changes themselves aren't contained within the tracking space, only a space-efficient indication of which pages require changes. The amount of tracking space allocated can be defined by the user; the maximum is 1% of the total space within the IASP. Using the CHGASPSSN command, a user can set the percentage of that 1%. For example, setting the field to 10% means that the tracking space would be 10% of 1%, or 0.1%, of the total IASP size. These parameters can be viewed using the DSPASPSSN command: Tracking Space Allocated is the percentage of the maximum (it would show 10% in the above example), and Tracking Space Used is the percentage of the available tracking space being used. If Tracking Space Used reaches 100%, no more changes can be tracked, and a full synchronization will be required.

2.3.3 Monitoring synchronization

To track how much data is left to be synchronized, the DSPASPSSN command can be used on the source system. On the second screen, there are fields for Total data out of synch and Percent complete. These fields display the megabytes of data that need to be resynchronized and how far the synchronization has progressed; both are updated as the synchronization runs. Each time a synchronization starts or is resumed, these fields are reset. In the case of a resume, the percent complete resets to 0, but you should also see a reduced total data out of synch.

2.3.4 Calculating Full Synchronization Time

To determine the time needed for a full synchronization, divide the total space utilized in the IASP by the effective communications capability of the chosen communication lines. For example, if the IASP size is 900 GB and you are using 1 Gb Ethernet switches, the synchronization time will be less than an hour. However, if you are using two T3/DS3 lines, each having an effective throughput of 7.2 GB/hour, it would take around 63 hours to do the synchronization. This was calculated by dividing the size of the IASP by the effective GB/hour, that is, 900 GB divided by 14.4 GB/hour.

In most cases, the size of the data is used in the calculation, not the size of the IASP. An exception to this is a *NWSSTG object in an IASP. An *NWSSTG object is treated as one file, so the size of the *NWSSTG is used instead of the amount of data within the *NWSSTG file. To compute the full synchronization time for an *NWSSTG in an IASP, divide the size of the network storage space of the IBM i hosted partition by the effective speed of the communications mechanism. For example, if the network storage space hosting IBM i was set up as 600 GB, it would take 42 hours to do the full synchronization using two DS3 lines. To improve the synchronization time, a compression device can be used.
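A minimal sketch of this arithmetic, reproducing both of the examples above:

```python
def full_sync_hours(data_gb: float, lines: int, gb_per_hour_per_line: float) -> float:
    """Full synchronization time: data to copy divided by the combined
    effective throughput of the communication lines."""
    return data_gb / (lines * gb_per_hour_per_line)

# 900 GB IASP over two DS3 lines at 7.2 GB/hour each:
print(full_sync_hours(900, 2, 7.2))  # 62.5, the "around 63 hours" above
# 600 GB *NWSSTG over the same two DS3 lines:
print(full_sync_hours(600, 2, 7.2))  # ~41.7, the "42 hours" above
```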

2.3.5 Synchronization Priority

The synchronization priority setting (low, medium, high) determines the amount of resources allocated to synchronization. Lower settings gate synchronization, which in turn allows more resources to be allocated to non-synchronization work.

2.3.6 Managing Contention between run-time and synchronization

Ideally, synchronization runs best when the system is quiet, but most businesses cannot support that much quiesced time. Thus, synchronization will most likely be contending for system resources with the normal production workload, as well as the normal geographic mirroring run-time workload. For the least effect on production work, a synchronization priority of low can be selected. However, this lengthens the time required to complete the synchronization, which also lengthens the time without a viable target.

3 Metro Mirror and Global Mirror

When using the Metro Mirror or Global Mirror technology within PowerHA, the overhead of replication is offloaded to the external storage device. However, the SAN infrastructure between the local and remote storage systems plays a critical role in the speed of replication. Specifically, for Metro Mirror, which is synchronous replication, if the SAN bandwidth is too small to handle the traffic, application write I/O response time will be affected. Use the following guidelines when calculating the bandwidth required for external storage replication (a worked example and a small calculation sketch follow the list):

1. Assume 10 bits per byte for network overhead.
2. If the compression ratio of the devices on the remote links is known, it can be applied.
3. Assume a maximum of 80% utilization of the network.
4. Apply a 10% uplift factor to the result to account for peaks within 5-minute data collection intervals, or a 20-25% uplift factor for 15-minute intervals.

The following is an example using these guidelines:

1. The highest reported write rate from the IBM i is 40 MBps.
2. Assume 10 bits per byte for network overhead: 40 MBps * 1.25 = 50 MBps.
3. Assume a maximum of 80% utilization of the network: 50 MBps * 1.25 = 62.5 MBps.
4. Apply a 10% uplift for 5-minute intervals: 62.5 MBps * 1.1 = 69 MBps.
5. The needed bandwidth is 69 MBps.
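A minimal sketch of these guidelines; the compression ratio parameter is applied only if known (guideline 2), and the 15-minute uplift is taken at the top of the stated 20-25% range:

```python
def replication_bandwidth_mBps(peak_write_mBps: float,
                               interval_minutes: int = 5,
                               compression_ratio: float = 1.0) -> float:
    """Bandwidth estimate following the guidelines above."""
    bw = peak_write_mBps * 1.25        # 10 bits per byte (10/8) overhead
    bw = bw / compression_ratio        # apply known remote-link compression
    bw = bw * 1.25                     # plan for only 80% utilization
    uplift = 1.10 if interval_minutes <= 5 else 1.25  # 10% vs 20-25% uplift
    return bw * uplift

print(round(replication_bandwidth_mBps(40)))  # 69, matching the example
```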

A Recovery Point Objective (RPO) estimation tool is available to IBM and IBM Business Partners. This tool provides a method for estimating the RPO in a DS8000 Global Mirror environment in relation to the bandwidth available and other environmental factors (see Techdocs document ID PRS3246 for IBM and IBM Business Partners).

4 FlashCopy

FlashCopy is another technology integrated into PowerHA. FlashCopy is a very fast point-in-time copy done using external storage. The flashed copy can be attached to another partition or system and used for backups, queries, or other work. With a FlashCopy space-efficient or thin-provisioned relationship, disk space is allocated for the target only when a write changes a sector on the source, or when a write is directed to the target. For this reason, FlashCopy SE requires less disk capacity than standard FlashCopy, which can help lower the amount of physical storage needed. FlashCopy SE is designed for temporary copies, so it is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be a trade-off in terms of performance versus space efficiency. Also, the copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity.

4.1 DS8000 FlashCopy SE

The DS8000 FlashCopy SE repository is an object within an extent pool that provides the physical disk capacity reserved for space-efficient target volumes. When provisioning a repository, storage pool striping is automatically used with a multi-rank extent pool to balance the load across the available disks.

FlashCopy SE is optimized to work with repository extent pools consisting of four RAID arrays. In general, we recommend that the repository extent pool contain between one and eight RAID arrays. It is also important that adequate disk resources are configured to avoid creating a performance bottleneck. It is advisable to use the same disk speed or faster for the target repository as for the source volumes. We also recommend that the repository extent pool have as many disk drives as the source volumes.

After the repository is defined in the extent pool, it cannot be expanded, so planning is important to ensure that it is configured large enough. If the repository becomes full, the FlashCopy SE relationships will fail. After a relationship fails, the target becomes unavailable for reads or writes, but the source volumes are not affected.

You can estimate the physical space needed for a repository by using historical performance data for the source volumes, along with knowledge of the duration of the FlashCopy SE relationship. In general, each write to a source volume consumes one track of space on the repository (57 KB for CKD, 64 KB for FB). Thus, the following calculation can be used to come up with a reasonable estimate:

IO rate * (write% / 100) * ((100 - rewrite%) / 100) * track size * duration in seconds * ((100 + contingency%) / 100) = repository capacity estimate in KB

Because it is critical not to undersize the repository, a contingency factor of up to 50% is suggested. You can monitor whether the repository has reached a threshold by using Simple Network Management Protocol (SNMP) traps. You can set notification for any percentage of free repository space, with default notifications at 15% free and 0% free. Using the Lab Services Advanced Copy Services Toolkit, you can convert and send these messages to the QSYSOPR message queue.
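A minimal sketch of this estimate; the workload numbers in the example are hypothetical:

```python
def repository_capacity_gb(io_rate_per_sec: float, write_pct: float,
                           rewrite_pct: float, track_kb: int,
                           duration_sec: int, contingency_pct: float = 50) -> float:
    """Repository size estimate from the formula above. track_kb is
    57 for CKD volumes or 64 for FB volumes; a contingency of up to
    50% is suggested because undersizing the repository is critical."""
    kb = (io_rate_per_sec
          * (write_pct / 100)
          * ((100 - rewrite_pct) / 100)
          * track_kb
          * duration_sec
          * ((100 + contingency_pct) / 100))
    return kb / 1_000_000  # KB -> GB

# Hypothetical workload: 2,000 IO/s, 40% writes, 30% rewrites,
# FB tracks (64 KB), a 24-hour relationship, 50% contingency
print(round(repository_capacity_gb(2000, 40, 30, 64, 24 * 3600)))  # ~4645 GB
```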

4.2 SVC/V7000 Thin Provisioning

For SVC/V7000, when you are using a fully allocated source with a thin-provisioned target, you need to disable the background copy and cleaning mode on the FlashCopy map by setting both the background copy rate and the cleaning rate to zero. If these features are enabled, the thin-provisioned volume will either go offline or grow as large as the source.

You can select the grain size (32 KB, 64 KB, 128 KB, or 256 KB) for thin provisioning. The grain size that you select affects the maximum virtual capacity of the thin-provisioned volume; if you select 32 KB for the grain size, the volume size cannot exceed 260,000 GB. The grain size cannot be changed after the thin-provisioned volume has been created. In general, smaller grain sizes save space and larger grain sizes produce better performance. For best performance, the grain size of the thin-provisioned volume should match the grain size of the FlashCopy mapping. However, if the grain sizes are different, the mapping still proceeds.

You can set the cache mode to readwrite for maximum performance when you create a thin-provisioned volume. Also, to prevent a thin-provisioned volume from using up its capacity and going offline, the autoexpand feature can be turned on.


More information

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Applied Technology Abstract Microsoft SQL Server includes a powerful capability to protect active databases by using either

More information

Disaster Recovery Solution Achieved by EXPRESSCLUSTER

Disaster Recovery Solution Achieved by EXPRESSCLUSTER Disaster Recovery Solution Achieved by EXPRESSCLUSTER http://www.nec.com/expresscluster/ NEC Corporation System Software Division December, 2012 Contents 1. Clustering system and disaster recovery 2. Disaster

More information

Storage. The text highlighted in green in these slides contain external hyperlinks. 1 / 14

Storage. The text highlighted in green in these slides contain external hyperlinks. 1 / 14 Storage Compared to the performance parameters of the other components we have been studying, storage systems are much slower devices. Typical access times to rotating disk storage devices are in the millisecond

More information

Oracle Database 10g: Performance Tuning 12-1

Oracle Database 10g: Performance Tuning 12-1 Oracle Database 10g: Performance Tuning 12-1 Oracle Database 10g: Performance Tuning 12-2 I/O Architecture The Oracle database uses a logical storage container called a tablespace to store all permanent

More information

VERITAS Storage Foundation 4.3 for Windows

VERITAS Storage Foundation 4.3 for Windows DATASHEET VERITAS Storage Foundation 4.3 for Windows Advanced Volume Management Technology for Windows In distributed client/server environments, users demand that databases, mission-critical applications

More information

High Availability Essentials

High Availability Essentials High Availability Essentials Introduction Ascent Capture s High Availability Support feature consists of a number of independent components that, when deployed in a highly available computer system, result

More information

Dependable Systems. 9. Redundant arrays of. Prof. Dr. Miroslaw Malek. Wintersemester 2004/05 www.informatik.hu-berlin.de/rok/zs

Dependable Systems. 9. Redundant arrays of. Prof. Dr. Miroslaw Malek. Wintersemester 2004/05 www.informatik.hu-berlin.de/rok/zs Dependable Systems 9. Redundant arrays of inexpensive disks (RAID) Prof. Dr. Miroslaw Malek Wintersemester 2004/05 www.informatik.hu-berlin.de/rok/zs Redundant Arrays of Inexpensive Disks (RAID) RAID is

More information

NetApp Software. SANtricity Storage Manager Concepts for Version 11.10. NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.

NetApp Software. SANtricity Storage Manager Concepts for Version 11.10. NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. NetApp Software SANtricity Storage Manager Concepts for Version 11.10 NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1

More information

Hardware Performance Optimization and Tuning. Presenter: Tom Arakelian Assistant: Guy Ingalls

Hardware Performance Optimization and Tuning. Presenter: Tom Arakelian Assistant: Guy Ingalls Hardware Performance Optimization and Tuning Presenter: Tom Arakelian Assistant: Guy Ingalls Agenda Server Performance Server Reliability Why we need Performance Monitoring How to optimize server performance

More information

JOHNSON COUNTY COMMUNITY COLLEGE 12345 College Blvd., Overland Park, KS 66210 Ph. 913-469-3812 Fax 913-469-4429

JOHNSON COUNTY COMMUNITY COLLEGE 12345 College Blvd., Overland Park, KS 66210 Ph. 913-469-3812 Fax 913-469-4429 JOHNSON COUNTY COMMUNITY COLLEGE 12345 College Blvd., Overland Park, KS 66210 Ph. 913-469-3812 Fax 913-469-4429 ADDENDUM #1 September 21, 2015 REQUEST FOR PROPOSALS #16-033 FOR CLOUD BASED BACKUP & RECOVERY

More information

Multiple Public IPs (virtual service IPs) are supported either to cover multiple network segments or to increase network performance.

Multiple Public IPs (virtual service IPs) are supported either to cover multiple network segments or to increase network performance. EliteNAS Cluster Mirroring Option - Introduction Real Time NAS-to-NAS Mirroring & Auto-Failover Cluster Mirroring High-Availability & Data Redundancy Option for Business Continueity Typical Cluster Mirroring

More information