VERITAS NETBACKUP SOFTWARE PERFORMANCE TUNING
George Winter
October 23, 2003
VERITAS ARCHITECT NETWORK
TABLE OF CONTENTS

Introduction
Formal Benchmarks of VERITAS NetBackup Software Performance
Understanding and Testing the Backup Infrastructure
Tape Backup Recommendations
Network Recommendations
A Network Bottleneck Case Study
Storage Recommendations
Tuning VERITAS NetBackup Software
Capacity Planning
Sizing the NetBackup Media Server
Memory usage
Configuring Multiplexing and Multi-Streaming
Compression
Buffer Management
NET_BUFFER_SZ
SIZE_DATA_BUFFERS
NUMBER_DATA_BUFFERS
NUMBER_DATA_BUFFERS_RESTORE
Buffers and System Memory
BPTM Log Contents
Improving Restore Performance
Disk-Based Storage
Indexing the NetBackup Image Catalog
Thousands of Small Backups
Determining Read and Write Performance of Backups and Restores
Log Files
Java User Interface Performance
Summary
Appendix: Operating System Settings
Minimum Solaris Kernel Parameters
TCP_CLOSE_WAIT_INTERVAL
Additional Web Resources
Introduction

If you are an IT administrator, there is a good chance that improving backup performance ranks high on your list of priorities. Backup performance, after all, is one of the most common topics in calls to VERITAS technical support. But getting to the bottom of performance issues can be tricky. There are several interrelated hardware, software, and networking components in the backup environment, and each component can create bottlenecks that obstruct the smooth flow of data. Understanding the path that data takes from the storage subsystem to the backup medium is essential before determining what you can achieve by performance tuning.

This article presents guidelines for tuning VERITAS NetBackup software performance, recommendations for testing the backup infrastructure, and information on formal benchmarks of NetBackup software performance.

Formal Benchmarks of VERITAS NetBackup Software Performance

Classifying the performance of a backup system as poor is often the result of unrealistic expectations. It is frequently difficult to assess performance realistically, so formal benchmarks can help shed light on this dilemma. Although the real world has a multitude of variables that cannot be reflected in a formal benchmark study, these tests are valuable for assessing the comparative performance of different products and for demonstrating a high-water mark for backup performance.

The values achieved in a benchmark help you determine how well VERITAS NetBackup software is performing in your environment. This is not to say that your environment will be able to achieve the same backup throughput; but if your performance assessment is significantly different from the benchmark results, it could indicate a performance bottleneck in the system.

The Storage Networking Industry Association (SNIA) benchmark for backup processing uses NDMP and a standard file system without databases to measure a backup product's performance. VERITAS NetBackup software delivered backup performance of 1 TB/hour in the SNIA benchmark.

In the Oracle database backup benchmark study previously reported in eWEEK magazine, VERITAS NetBackup software performance was tested while backing up an Oracle 9i database. Using NetBackup 4.5 and performing a hot backup, VERITAS achieved a rate of 2.33 TB/hour.
Understanding and Testing the Backup Infrastructure

Every backup environment has a bottleneck. Data may move around the system very fast indeed, but somewhere in the system there is a gating element that prevents the system from running even faster. In order to characterize what a configuration is capable of, you must break down the infrastructure into its component parts. This enables you to determine the limits of each element and assess the theoretical limits of the configuration. The backup environment (see Figure 1) has three main components: the storage subsystem, the network, and the backup medium (which is usually magnetic tape).

Figure 1. The Backup Infrastructure

The following sections include information on testing the backup infrastructure before tuning the performance of VERITAS NetBackup software. This can help you gain an accurate assessment of the environment independent of NetBackup.

Tape Backup Recommendations

Determining the rated throughput of a tape drive is relatively easy. Many tape drive manufacturers have utilities that measure throughput. If you have no access to such a utility, a simple test using dd or tar with null input achieves the same result. On UNIX systems, iostat and sar also provide tape performance statistics. On Windows systems, the NetBackup performance counters can be analyzed.

As a rule of thumb, it is not advisable to saturate a tape drive connection: there should be no more than two drives per SCSI or Fibre Channel bus. Also, bear in mind that the host bus adapter (HBA) is an integral part of the hardware connection. If an HBA device is rated at 100 MB/sec, then assume an optimal loading of 70 MB/sec, using the 70 percent rule.
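As a quick sanity check, a dd write of null input straight to the tape device shows what the drive and its bus can sustain with NetBackup out of the path. The following is a minimal sketch, assuming a Solaris no-rewind device name of /dev/rmt/0cbn; because zeros compress extremely well, the result is a best-case figure when hardware compression is enabled.

    # Write 1 GB of zeros directly to the tape drive and time it.
    # Throughput in MB/sec is roughly 1024 divided by the elapsed seconds.
    time dd if=/dev/zero of=/dev/rmt/0cbn bs=262144 count=4096
    mt -f /dev/rmt/0cbn rewind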
Figure 2. The Tape Configuration

Network Recommendations

The network infrastructure contains many components that can affect backup throughput. These include slow DNS servers, incorrect or outdated network interface card (NIC) drivers, unplanned network traffic, and over-taxed CPU resources on either side of the network connection.

Iperf, a free utility for Windows, UNIX, and Linux that tests TCP and UDP bandwidth, is useful for testing a network connection. The Iperf utility isolates the network, removing tape and disk from the equation, and tests network throughput in either direction. It is a good idea to run tests at different times of day to gain a complete picture of network usage and throughput. If Iperf is not available, FTP can also be used to study performance.
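To test a path end to end, Iperf is run as a server on one side and as a client on the other. A minimal sketch using the classic Iperf command-line options; the media server host name is a hypothetical placeholder.

    # On the media server (receiver):
    iperf -s

    # On the client (sender): run for 60 seconds with a larger TCP window.
    iperf -c mediaserver01 -t 60 -w 256k

Running the client in both directions (and again at different times of day) gives a picture of the usable bandwidth between the two hosts before any backup traffic is added.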
A Network Bottleneck Case Study

Figure 3. The Network Configuration

A VERITAS customer experiencing slow and inconsistent network throughput approached the technical support desk with a problem. While running backup processing, it appeared that Windows servers performed better than Solaris servers, and some end users experienced no problems at all. Using Iperf to test a 100 Mb segment of the network, the customer discovered a switch with an out-of-date NIC driver. The switch was also exhibiting problems negotiating half/full duplex. After upgrading the NIC driver and resolving the duplex negotiation problem, all Iperf tests were rerun. The results showed a dramatic improvement in network throughput.

Figure 4. Slow Network Case Study
Storage Recommendations

The storage subsystem presents more software, hardware, and network components that can potentially interfere with backup performance. Disk systems are becoming increasingly prevalent as intermediary storage locations in backup processing, and as permanent augmentations to a tape storage system. Although SCSI disk is still regarded as the fastest, ATA and IDE protocols are also popular, and many ATA drives are being designed as tape drive replacements. Understanding the differences between the various drive protocols is important.

Also, remember that storage exists in many locations within the backup infrastructure, not only at the application data store. The NetBackup client contains disk storage, and the NetBackup server hosts the catalogs that are used during backup and restore operations.

It is essential to understand the capabilities of each component of the storage infrastructure before assessing the optimum throughput. This includes the specification for each HBA and network connection, for example, whether it is SCSI or Fibre Channel. Again, free utilities are available to measure storage subsystem performance (Bonnie, Bonnie++, and tiobench), and you should use the 70 percent rule to determine realistic throughput expectations.

Figure 5. The Storage Subsystem Configuration
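As with tape and network, a crude read test gives a baseline for the disk subsystem before any NetBackup tuning. A sketch using dd; the file path is a placeholder for a large existing file on the subsystem under test.

    # Read a large file from the disk being backed up and discard the data;
    # the elapsed time approximates the sustainable read rate for that LUN.
    time dd if=/data/bigfile of=/dev/null bs=65536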
Tuning VERITAS NetBackup Software

Testing the backup infrastructure gives an accurate assessment of the environment independent of NetBackup software. The next step is to understand how NetBackup software relates to this infrastructure.

Capacity Planning

Before making any attempt to tune the NetBackup software environment, it is necessary to understand the data being backed up. To determine how to configure NetBackup software appropriately, you must answer the following questions:

- Is the data being backed up to a local or remote location?
- What are the characteristics of the storage subsystem and data path?
- How busy is the storage subsystem?
- How much data is being backed up?
- How much of the data being backed up will change on a regular basis?
- What type of data is being backed up: text, graphics, or databases?
- How many files are being backed up, and how compressible are the files?

Sizing the NetBackup Media Server

When sizing the NetBackup Media Server, bear in mind that for this platform I/O performance is generally more important than CPU performance. Also, when choosing a server, consider the potential for future CPU, memory, and I/O expansion.

Typically, 1 MB/sec of data movement requires 5 MHz of CPU capacity. Gigabit Ethernet configurations should add one CPU per GbE card unless the cards contain their own CPU. This is a conservative estimate, and mileage will vary depending on the server hardware and operating system. Note that this calculation includes movement of data both into and out of the Media Server. For example, a system backing up clients over a network to a local tape drive at a rate of 10 MB/sec would need 100 MHz of available CPU power: 50 MHz to move data from the network to the NetBackup Media Server, and 50 MHz to move data from the NetBackup Media Server to tape.

Benchmark studies have found that the ratio of MHz to MB/sec increases as the number of CPUs decreases, and that the ratio increases as the load on the CPU approaches 100 percent. When sizing the NetBackup Media Server, also bear in mind that the server operating system and other server applications will be consuming CPU resources.

Memory usage

Memory is relatively cheap and, as far as the NetBackup server is concerned, more is always better. At least 512 MB is recommended, particularly for the Java GUI. NetBackup software uses shared memory for local backups, and NetBackup buffer usage will affect how much is consumed (see Buffer Management). Don't forget that other applications are also running and using memory.
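Returning to the CPU rule of thumb above, a quick shell sketch of the arithmetic; the 30 MB/sec target is an assumed figure for illustration only.

    # 5 MHz per MB/sec, counted once for data in and once for data out
    expr 30 \* 5 \* 2     # a 30 MB/sec aggregate target needs roughly 300 MHz of CPU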
Configuring Multiplexing and Multi-Streaming

Tape drives operate at their best when they receive a sustained stream of data that matches the throughput capacity of the drive. Continuous start-and-stop processing (the result of writing several small files) is inefficient, adds significant wear and tear to the drive mechanism, and reduces the lifespan of the hardware. Keeping tape drives streaming, however, can be a challenge.

Figure 6. Multiplexing and Multi-Streaming to Tape

Multiplexing and multi-streaming provide powerful techniques for generating data streams that maximize the efficient use of tape and minimize wear and tear on tape drives. NetBackup software supports concurrent use of both multiplexing and multi-streaming to tape.

Multiplexing allows multiple streams of data, from one or more NetBackup clients, to be written simultaneously to a single tape drive. This can help generate sufficient data to maintain streaming at the tape drive. Note that restoring data from a multiplexed tape requires reading all contents of the tape, both applicable and non-applicable, which slows the restore process.

Multi-streaming enables a single NetBackup client to send multiple data streams to one or more tape drives. In most cases, disk access is much faster than tape, and multi-streaming can be used to spread the backup load efficiently across many tape drives and speed the backup process. Each physical disk configured for multi-streaming should be accessed by no more than one streaming process at a time. If a single disk is the target of multiple streams, it will cause disk thrashing and significantly reduce the performance of the read process.

The optimal values for multiplexing and multi-streaming are achieved when tape drives are able to sustain streaming. The NEW_STREAM directive is useful for fine-tuning streams to ensure that no disk subsystem is under- or over-utilized; a sketch of a backup selections list using the directive follows this section. Incremental backups often benefit from multiplexing, and staging the incrementals to disk before starting the multiplexed backup process is worth considering. Local backups typically benefit less from multiplexing and can be configured with lower multiplexing values.
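A minimal sketch of a policy backup selections list that uses NEW_STREAM; the paths are hypothetical, and each group is assumed to live on a separate physical disk so that no spindle is read by more than one stream at a time.

    NEW_STREAM
    /export/home
    NEW_STREAM
    /var
    NEW_STREAM
    /opt

Each NEW_STREAM directive starts a new backup stream, and the paths that follow it (up to the next directive) are backed up in that stream.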
Compression

Compressing the backup data stream can reduce the amount of data sent over the network and minimize the size of the backup on tape. If a NetBackup client configuration suffers from a slow network connection, software compression at the client can speed the backup process. If backup media usage is the primary concern, compression at the tape drive (hardware compression) will reduce the amount of data being offloaded.

Whether using tape compression or client-side software compression, there are some limitations to be aware of. Compression can be configured at the client in software or at the tape drive in hardware, but both must never be activated at the same time. The algorithms used by the two compression processes can conflict, resulting in a data stream larger than the original file, which has the opposite effect on performance from the one desired.

Client-side compression creates additional processing overhead at the client. The overhead of compressing data increases the elapsed time of backup and restore processing, slowing the transfer rate between the client and the NetBackup Media Server. If the transfer rate is insufficient to maintain tape streaming, compression will result in additional wear and tear on the tape drives, although multiplexing multiple clients to a single tape drive can offset the problem.

Compression at the tape drive offloads processing overhead from client and server and is always preferable if the objective is to reduce the amount of data written to tape. If client-side software compression is used, the bp.conf MEGABYTES_OF_MEMORY entry at the client may help performance. The bp.conf entry COMPRESS_SUFFIX indicates files that the compression routines must bypass.

Buffer Management

NetBackup software uses shared memory to buffer data between the network, or disk, and the tape drive. Four of the NetBackup buffer management parameters can be modified: NET_BUFFER_SZ, the communications buffer setting; SIZE_DATA_BUFFERS, the size of each buffer; NUMBER_DATA_BUFFERS, the number of buffers; and NUMBER_DATA_BUFFERS_RESTORE, the number of restore buffers.

Default values for the SIZE_DATA_BUFFERS, NUMBER_DATA_BUFFERS, and NUMBER_DATA_BUFFERS_RESTORE parameters are changed by creating files in the NetBackup directory /usr/openv/netbackup/db/config. Each file is named after the parameter it modifies and contains a single value that represents the new buffer setting.

NET_BUFFER_SZ

Changes to the NET_BUFFER_SZ parameter adjust the TCP/IP socket buffer size, which is used in data transfers between the NetBackup Media Server and UNIX clients. NET_BUFFER_SZ is modified by creating a file named NET_BUFFER_SZ in the /usr/openv/netbackup directory on the NetBackup Master Server, the NetBackup Media Server, and the NetBackup client. These files contain identical values, representing the new communications buffer size, and they override the NetBackup UNIX default of 32,032 bytes. On Windows systems, the optimal NET_BUFFER_SZ setting is 132,096, representing twice the 64 KB network buffer size plus 1 KB (2 x 65,536 + 1,024). This value is modified through the Windows Client tab of the Client Properties menu.
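A sketch of how these override files are created on a UNIX media server; the values shown are illustrative rather than recommendations, and both backup and restore must be retested after any change.

    # On the media server: use 256 KB data buffers and 16 buffers per drive.
    echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    echo 16     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS

    # On the master server, media server, and UNIX client: raise the
    # TCP/IP socket buffer from its 32,032-byte default.
    echo 65536 > /usr/openv/netbackup/NET_BUFFER_SZ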
SIZE_DATA_BUFFERS

The SIZE_DATA_BUFFERS parameter determines the size of each shared memory buffer NetBackup software uses to move data between disk and tape. The default values are 32 KB for non-multiplexed UNIX backups, 64 KB for multiplexed UNIX backups, and 64 KB for Windows, regardless of multiplexing. To override the default setting, a value must be entered in the SIZE_DATA_BUFFERS file. This value must be a power of 2 and entered as an exact number of bytes; for example, 32 KB would be entered as 32768 (32 multiplied by 1024).

Changes to buffer values do not require NetBackup processes to be bounced. Modifying a buffer value is sufficient to cause the next backup run to use the new values. Restore processes use the same buffer values that were in effect at the time the backup was taken.

The I/O size of the backup tape is derived from the SIZE_DATA_BUFFERS value. This buffer setting may be restricted by the maximum allowable value at the tape device or file system. Network configuration parameters must also be taken into account when adjusting buffer size: the TCP/IP maximum transmission unit (MTU) value for the LAN segment between the NetBackup client and Media Server may require changing to accommodate adjustments to the SIZE_DATA_BUFFERS value.

NUMBER_DATA_BUFFERS

The default NUMBER_DATA_BUFFERS setting is 8 for non-multiplexed backups, 4 for multiplexed backups, and 8 for non-multiplexed restore, verify, import, and duplicate operations. These values are doubled for Windows NT environments.

NUMBER_DATA_BUFFERS_RESTORE

NUMBER_DATA_BUFFERS_RESTORE is a NetBackup 4.5 parameter and is only applicable to multiplexed restores. The default value is 8 for a non-multiplexed image and 12 for a multiplexed image. This buffer parameter can be helpful when tuning the restore of multiple concurrent databases.

Buffers and System Memory

Changing the size and number of NetBackup data buffers affects available shared memory, which is a limited system resource. The total amount of shared memory used for each tape drive can be calculated using the following formula:

Shared memory = (SIZE_DATA_BUFFERS * NUMBER_DATA_BUFFERS) * drives * MPX

It is important to monitor the impact of modifications to the buffer parameters carefully. Not all changes have a positive effect, and although it is rare, changes to the buffer size have been known to slow backup processing and create problems during restores. Any change to buffer parameters must be thoroughly tested by performing both backup and restore processes. You can write to the tape drive with buffers greater than 256 KB, but recovery with certain tape drives may not be possible with this buffer size. After changing the buffer size, always test the recovery.
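As an illustration of the formula above, with assumed values of 256 KB buffers, 16 buffers per drive, two drives, and a multiplexing factor of 4:

    # (SIZE_DATA_BUFFERS * NUMBER_DATA_BUFFERS) * drives * MPX
    expr 262144 \* 16 \* 2 \* 4     # 33554432 bytes = 32 MB of shared memory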
BPTM Log Contents

Analyzing data from the BPTM log indicates whether buffer settings should be modified. The wait and delay counters in the log help to determine where changes are needed and what impact a change has made. Figure 7 shows the BPTM entries that apply to buffer settings.

Figure 7. NetBackup BPTM Log Entries for Buffer Parameters

UNIX memory use is monitored using the vmstat utility; the vmstat scan rate indicates the amount of swapping activity taking place. sar also provides insight into UNIX memory use. On Windows systems, the Windows Performance Monitor utility can be used to track memory utilization.

Figure 8. NetBackup Media Server Processes Producer-Consumer Relationship

The goal in managing NetBackup buffers is to achieve a balance between the data producer and the data consumer (see Figure 8). The data producer (the NetBackup client during a backup, and the NetBackup Media Server during a restore) must have an empty buffer to write to; if one is not available, it waits, causing a delay. The data consumer (the NetBackup Media Server during a backup, and the NetBackup client during a restore) must have a full data buffer to read from. Entries in the BPTM log show the number of times BPTM waited and the exact number of times this caused a delay.

A high wait count for empty buffers indicates that the data producer (the BPTM child process) is receiving data from the source faster than the data consumer (the BPTM parent process) can process it. If this is the result of a multiplexed backup, reducing the number of plexes may alleviate the problem; adding buffers is another alternative. If the BPTM log indicates that the parent process is waiting for full buffers, the tape drive is starving for data. In this situation, changing buffer values will not help, but adding multiplexing could.

The actual delay time can be calculated by multiplying the waits for empty buffers by 20 milliseconds and the waits for full buffers by 30 milliseconds, and then adding the two numbers. For example, a child process wait count of 65,084 equates to a delay of 21.7 minutes (65,084 multiplied by 20 milliseconds).

The BPTM log delay values can also be viewed as a ratio. An ideal ratio of the number of times waited to the number of times delayed is 1:1.
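To locate these wait and delay counters, enable bptm debug logging and search the day's log. A sketch, assuming the standard debug log location; the exact message text varies by NetBackup release, so "waited" is used here only as a broad match.

    # The directory must exist for bptm to write its debug log.
    mkdir -p /usr/openv/netbackup/logs/bptm
    grep -i "waited" /usr/openv/netbackup/logs/bptm/log.*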
Improving Restore Performance

Restore performance problems commonly result from one of four issues:

- Improperly set multiplexing
- Poor index performance
- Sub-optimal fragment size
- MPX_RESTORE_DELAY and NUMBER_DATA_BUFFERS_RESTORE settings

The ideal multiplexing setting is one that is only as high as needed to continuously stream the tape drives. Improperly set multiplexing can result in more tape searching than is necessary. However, do not assume that multiplexing is always the cause of performance problems, because this is unlikely.

The performance of the disk subsystem hosting the NetBackup image catalog indexes significantly affects restore processing. The disk subsystem must be configured for high-performance read processing to allow fast location of catalog entries. Indexing the catalog will also improve restore speed. Testing the performance of the catalog disk subsystem determines whether or not the indexes are slowing restores.

NetBackup software can perform multiple restores from a single multiplexed tape. Setting MPX_RESTORE_DELAY to 30 seconds, a change that does not require processes to be brought down and restarted, can improve restore processing. The NUMBER_DATA_BUFFERS_RESTORE parameter enables other NetBackup processes to stay busy while a multiplexed tape is being positioned during restore processing. However, the additional buffers cause more physical RAM to be used and should be factored into overall memory usage calculations.

Disk-Based Storage

Storing backup data on disk generally provides a faster restore process. However, disk should never be considered a replacement for tape for long-term archival purposes. Tuning disk-based storage for performance requires the same approach as tape-based storage: optimum buffer settings are likely to vary depending on the specifics of the site configuration, and thorough testing will determine the best values.

Disk-based backup storage can be useful if you have a lot of incremental backups and the percentage of data change is small. If the volume of data in incremental copies is insufficient to ensure streaming to tape drives, writing to disk can speed the backup process and alleviate wear and tear on the tape drives.
Indexing the NetBackup Image Catalog

Environments that contain a large number of backups can see measurable improvement from indexing the images in the NetBackup ASCII image catalog. Indexes allow NetBackup software to go directly to the entry for a file rather than search the entire catalog. Figure 9 shows the command used to generate indexes for NetBackup clients.

Figure 9. NetBackup index_clients Command

The level value refers to the number of levels in the hierarchy of directories from which files were backed up. For example, if the search is for /payroll/smith/taxes/97 and level has a value of 2, NetBackup software begins the search at /payroll/smith. The default value for level is 9.

The NetBackup index_clients command is run once to activate indexing for a client. Subsequent indexing occurs automatically when NetBackup performs its nightly cleanup operations. To index a selection of clients, the index_clients command must be run for each client. (Use of wildcard characters in the client name is not permitted.)

The NetBackup image catalog index files do not require very much space. Regardless of how many clients are indexed and to what level indexing is set, the index will increase the size of the catalog by approximately 1.5 percent. NetBackup software does not produce index files for backups containing fewer than 200 files.

Image catalog index files reside in the directory /usr/openv/netbackup/db/images/clientname/INDEX, and the indexing level resides in the file /usr/openv/netbackup/db/images/clientname/INDEXLEVEL. Deleting the INDEXLEVEL file for a client stops NetBackup software from generating new indexes for that client, although restores will continue to use the existing index. Renaming the INDEX directory to INDEX.ignore causes NetBackup to temporarily stop using the index during restores; the rename does not delete index information, and reverting to the old name allows NetBackup software to resume index use. Deleting both the INDEX and INDEXLEVEL files for a client permanently ends indexing for that client.

Thousands of Small Backups

If a client produces thousands of small backup images, search performance can be improved by running the bpimage command, as sketched below.
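A sketch of the invocation, assuming the usual admincmd location and a hypothetical client name; the figure that originally showed the exact format is not reproduced here, so check the command reference for your release.

    # Build the image list files for one client (client name is illustrative).
    /usr/openv/netbackup/bin/admincmd/bpimage -create_image_list -client acme01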
Figure 10. The bpimage Command Format and Output Directory Contents

The files IMAGE_LIST, IMAGE_INFO, and IMAGE_FILES must not be edited, because they contain offsets and byte counts that are used during the seek process. These files add 30 to 40 percent more data to the client's catalog directory.

Determining Read and Write Performance of Backups and Restores

The NetBackup error log, located in /usr/openv/netbackup/db/error, records the data streaming speed, in kilobytes per second, for each backup and restore process. Searching the log for the string Kbytes/sec will locate a record similar to the example in Figure 11. The Kbytes/sec figure will vary depending on many factors, including the availability of system resources and system utilization, but the data can be used to assess the performance of the data streaming process.

Figure 11. Sample NetBackup Error Log

The statistics from the NetBackup error log show the actual amount of time spent reading and writing data to and from tape. This does not include time spent mounting and positioning the tape. Cross-referencing the information from the error log with data from the bpbkar log on the NetBackup client (which shows the end-to-end elapsed time of the entire process) indicates how much time was spent on operations unrelated to reading from and writing to tape.
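To pull these throughput figures out of the error catalog, a simple search such as the following can be used; this is a sketch that assumes the error location is a directory containing one file per day.

    grep "Kbytes/sec" /usr/openv/netbackup/db/error/*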
Each stream of multiplexed data also has an entry in the error log (see Figure 12).

Figure 12. Sample NetBackup Error Log for Multiplexing

The final entry in the log for the multiplexed backup shows the performance data. Figure 13 shows the performance data for the multiplexed backup listed in Figure 12. Figure 14 shows the NetBackup error log entry for a restore, and Figure 15 shows the same information for a restore from a multiplexed backup.

Figure 13. Sample NetBackup Error Log Showing Multiplexing Performance Statistics

Figure 14. Sample NetBackup Error Log Showing Restore Statistics

Figure 15. Sample NetBackup Error Log Showing Statistics for a Restore from a Multiplexed Backup

Log Files

The NetBackup logging facility has the potential to affect the performance of backup and recovery processing. However, logging is usually enabled only when troubleshooting a NetBackup problem, and the short-term nature of the performance impact can often be tolerated. The size of the impact is determined by the amount of logging used and the verbosity level set.

Java User Interface Performance

In some NetBackup environments, the Java GUIs may seem to run slowly. Increasing the amount of memory allocated to the GUIs can resolve this problem. The Java GUIs are initiated by executing the /usr/openv/netbackup/bin/jnbSA command. These executables are scripts that set up the Java environment before invoking the user interface. Changing the memory allocation for a GUI requires you to edit the script and change the memory allocation values.

Each Java GUI script contains a line with the variables ms4m and mx32m. These values represent the memory allocation defaults for the script, equating to a 4 MB minimum and a 32 MB maximum. To increase the memory available to the GUIs, change the variables as follows:

- To set the minimum memory allocation to 32 MB, change ms4m to ms32m.
- To set the maximum memory allocation to 256 MB, change mx32m to mx256m.
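A sketch of making that edit from the shell; the option strings come from the description above and may differ slightly between releases, so back up the script first and review the result before using it.

    # Preserve the original script, then raise the minimum and maximum Java heap.
    cp /usr/openv/netbackup/bin/jnbSA /usr/openv/netbackup/bin/jnbSA.orig
    sed -e 's/ms4m/ms32m/g' -e 's/mx32m/mx256m/g' \
        /usr/openv/netbackup/bin/jnbSA.orig > /usr/openv/netbackup/bin/jnbSA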
Summary

Obtaining the best performance from a backup infrastructure is not complex, but it requires careful review of the many factors that can affect processing. The first step in any attempt to improve backup performance is to gain an accurate assessment of each hardware, software, and networking component in the backup data path. Many performance problems are resolved before any NetBackup parameters are changed.

NetBackup software offers plenty of resources to help isolate performance problems and assess the impact of configuration changes. However, it is essential to thoroughly test both backup and restore processes after making any changes to NetBackup configuration parameters.

Appendix: Operating System Settings

Minimum Solaris Kernel Parameters

The Solaris operating system dynamically builds the OS kernel with each boot of the system. For other UNIX operating systems, please reference the system administration procedures to determine how to rebuild the kernel.

The parameters below reflect minimum settings recommended for a system dedicated to running VERITAS NetBackup software. If other applications are running on the server, such as the Oracle RDBMS, the values may need to be increased. The msgsys:msginfo_msgssz, msgsys:msginfo_msgmap, and msgsys:msginfo_msgseg parameters became obsolete with Solaris 8; they have been left in place below to avoid error messages, and any values applied to them are ignored.

set msgsys:msginfo_msgmap=500
The number of elements in the map used to allocate message segments.

set msgsys:msginfo_msgmnb=65536
The maximum length of a message queue in bytes. The length of the message queue is the sum of the lengths of all the messages in the queue.

set msgsys:msginfo_msgssz=8
The size of a message segment in bytes.

set msgsys:msginfo_msgseg=8192
The maximum number of message segments. The kernel reserves a total of msgssz * msgseg bytes for message segments, and this total must be less than 128 KB. Together, msgssz and msgseg limit the amount of text for all outstanding messages.

set msgsys:msginfo_msgtql=500
The maximum number of outstanding messages system-wide that are waiting to be read, across all message queues.

set semsys:seminfo_semmni=300
The maximum number of semaphore sets system-wide.

set semsys:seminfo_semmns=300
The maximum number of semaphores system-wide.

set semsys:seminfo_semmsl=300
The maximum number of semaphores per set.

set semsys:seminfo_semmnu=600
The maximum number of "undo" structures system-wide.

set shmsys:shminfo_shmmax=
The maximum size of a shared memory segment.

set shmsys:shminfo_shmmin=1
The minimum size of a shared memory segment.

set shmsys:shminfo_shmmni=100
The maximum number of shared memory segments that the system will support.

set shmsys:shminfo_shmseg=10
The maximum number of shared memory segments that can be attached to a given process at one time.

The ipcs -a command displays system IPC resources and their allocation. It is a useful command to run when a process is hanging or sleeping, to see whether there are resources available for it to use.
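These set lines belong in /etc/system on the Solaris media server and take effect after a reboot. A minimal check afterwards; the grep pattern is only a convenience for spotting the entries.

    # Confirm the IPC tunables are present in /etc/system, then view current usage.
    egrep 'msginfo|seminfo|shminfo' /etc/system
    ipcs -a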
TCP_CLOSE_WAIT_INTERVAL

The TCP_CLOSE_WAIT_INTERVAL parameter sets the amount of time to wait after a TCP socket is closed before it can be reused. The current value can be viewed using the commands shown below.

For Solaris 2.6 or earlier:
ndd -get /dev/tcp tcp_close_wait_interval

For Solaris 7 or later:
ndd -get /dev/tcp tcp_time_wait_interval

For HP-UX 11:
ndd -get /dev/tcp tcp_time_wait_interval

For HP-UX 10, use the nettune command instead of ndd.

Running these commands will produce a large number. The default value for most systems is 240,000, representing 4 minutes (240 seconds) expressed in milliseconds.

Additional Web Resources

The following Web sites provide information on the performance monitoring utilities mentioned in this article:

- Iperf
- Bonnie
- Bonnie++
- tiobench
VERITAS ARCHITECT NETWORK

VERITAS Software Corporation
Corporate Headquarters
350 Ellis Street
Mountain View, CA

For additional information about VERITAS Software, its products, VERITAS Architect Network, or the location of an office near you, please call our corporate headquarters or visit our Web site.

Copyright 2003 VERITAS Software Corporation. All rights reserved. VERITAS, the VERITAS Logo, and all other VERITAS product names and slogans are trademarks or registered trademarks of VERITAS Software Corporation. VERITAS and the VERITAS Logo Reg. U.S. Pat. & Tm. Off. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies. Specifications and product offerings subject to change without notice.