EMC NetWorker Version 8.2
Performance Optimization Planning Guide
302-000-697 REV 01

Copyright EMC Corporation. All rights reserved. Published in USA. Published January, 2015.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (support.emc.com).

EMC Corporation, Hopkinton, Massachusetts

CONTENTS

Figures
Tables
Preface

Chapter 1  Overview
  Introduction
  NetWorker data flow

Chapter 2  Size the NetWorker Environment
  Expectations
    Determine backup environment performance expectations
    Determining the backup window
    Determining the required backup expectations
  System components
    System
    Memory requirements
    System bus requirements
  Storage considerations
    Storage IOPS requirements
    NetWorker server and storage node disk write latency
    Storage performance recommendations
  Backup operation requirements
    NetWorker kernel parameter requirements
    Parallel save stream considerations
    Internal maintenance task requirements
  Network
  Target device
  The component 70 percent rule
  Components of a NetWorker environment
    Datazone
    NetWorker Management Console
    Console database
    NetWorker server
    NetWorker storage node
    NetWorker client
    NetWorker databases
    Optional NetWorker Application Modules
    Virtual environments
    NetWorker deduplication nodes
  Recovery performance factors
  Connectivity and bottlenecks
    NetWorker database bottlenecks

Chapter 3  Tune Settings

  Optimize NetWorker parallelism
    Server parallelism
    Client parallelism
    Group parallelism
    Multiplexing
  File system density
  Disk optimization
  Device performance tuning methods
    Input/output transfer rate
    Built-in compression
    Drive streaming
    Device load balancing
    Fragmenting a disk drive
  Network devices
    Fibre Channel latency
    DataDomain
    AFTD device target and max sessions
    Number of virtual device drives versus physical device drives
  Network optimization
    Advanced configuration optimization
    Operating system TCP stack optimization
    Advanced tuning
    Network latency
    Ethernet duplexing
    Firewalls
    Jumbo frames
    Congestion notification
    TCP buffers
    Increase TCP backlog buffer size
    NetWorker socket buffer size
    IRQ balancing and CPU affinity
    Interrupt moderation
    TCP offloading
    Name resolution

Chapter 4  Test Performance
  Determine symptoms
  Monitor performance
  Determining bottlenecks by using a generic FTP test
  Testing setup performance by using dd
  Test disk performance by using bigasm and uasm
    The bigasm directive
    The uasm directive

FIGURES

1  NetWorker backup data flow
2  NetWorker recover data flow
3  NetWorker datazone components
4  Network device bottleneck
5  Updated network
6  Updated client
7  Dedicated SAN
8  Raid array
9  NetWorker server write throughput degradation
10 Files versus throughput
11 Fibre Channel latency impact on data throughput
12 Network latency on 10/100 MB per second
13 Network latency on 1 Gigabyte


TABLES

1  Revision history
2  Minimum required memory for the NetWorker server
3  Bus specifications
4  Disk write latency results and recommendations
5  PSS support for NetWorker 8.1.x and 8.2
6  Required IOPS for NetWorker server operations
7  Disk drive IOPS values
8  The effect of blocksize on an LTO-4 tape drive


Preface

As part of an effort to improve its product lines, EMC periodically releases revisions of its software and hardware. Therefore, some functions described in this document might not be supported by all versions of the software or hardware currently in use. The product release notes provide the most up-to-date information on product features.

Contact your EMC technical support professional if a product does not function properly or does not function as described in this document.

Note
This document was accurate at publication time. Go to EMC Online Support (support.emc.com) to ensure that you are using the latest version of this document.

Purpose
This document describes how to test, plan, and optimize the NetWorker software.

Audience
This document is intended for the host system administrator, system programmer, or operator who will be involved in managing NetWorker.

Revision history
The following table presents the revision history of this document.

Table 1 Revision history
Revision  Date           Description
01        June 18, 2014  First release of this document for EMC NetWorker 8.2.

Related documentation
The following EMC publications provide additional information:

EMC NetWorker Administration Guide
  Describes how to configure and maintain the NetWorker software.

EMC NetWorker Avamar Devices Integration Guide
  Provides planning and configuration information on the use of Avamar devices in a NetWorker environment.

EMC NetWorker Cluster Installation Guide
  Describes how to install and administer the NetWorker software on cluster servers and clients.

EMC NetWorker Updating from a Previous Release Guide
  Describes how to update the NetWorker software from a previously installed release.

EMC NetWorker Release Notes
  Contains information on new features and changes, fixed problems, known limitations, and environment and system requirements for the latest NetWorker software release.

EMC NetWorker Command Reference Guide
  Provides reference information for NetWorker commands and options.
EMC NetWorker Installation Guide

  Explains how to install or update the NetWorker software for the clients, console, and server on all supported platforms.

EMC NetWorker Cloning Integration Guide
  Contains planning, practices, and configuration information for using the NetWorker, NMM, and NMDA cloning feature.

EMC NetWorker Data Domain Deduplication Devices Integration Guide
  Provides planning and configuration information on the use of Data Domain devices for data deduplication backup and storage in a NetWorker environment.

EMC NetWorker Disaster Recovery Guide
  Contains information about preparing for a disaster and recovering NetWorker servers, storage nodes, and clients.

EMC NetWorker Error Message Guide
  Provides information on common NetWorker error messages.

EMC NetWorker Licensing Guide
  Provides information about licensing NetWorker products and features.

EMC NetWorker Management Console Online Help
  Describes the day-to-day administration tasks performed in the NetWorker Management Console and the NetWorker Administration window. To view Help, click Help in the main menu.

EMC NetWorker User Online Help
  Describes how to use the NetWorker User program, the Windows client interface, to connect to a NetWorker server to back up, recover, archive, and retrieve files over a network.

EMC NetWorker Online Software Compatibility Guide
  Provides a list of client, server, and storage node operating systems supported by the EMC information protection software versions. You can access the Online Software Compatibility Guide on the EMC Online Support site. From the Support by Product pages, search for NetWorker using "Find a Product", and then select the Install, License, and Configure link.

EMC NetWorker Security Configuration Guide
  Provides an overview of the security configuration settings available in NetWorker, secure deployment, and the physical security controls needed to ensure the secure operation of the product.
EMC NetWorker Snapshot Management for NAS Devices Integration Guide
  Describes how to catalog and manage snapshot copies of production data that are created by using replication technologies on NAS devices.

NetWorker VMware Release Integration Guide
  Describes how to plan and configure VMware and the vStorage API for Data Protection (VADP) within an integrated NetWorker environment.

EMC NetWorker SolVe Desktop (formerly known as the NetWorker Procedure Generator (NPG))
  The EMC NetWorker SolVe Desktop (NPG) is a stand-alone Windows application that generates precise user-driven steps for high-demand tasks carried out by customers, Support, and the field. With the NPG, each procedure is tailored and generated based on user-selectable prompts. This generated procedure:
    Gathers the most critical parts of the NetWorker product guides
    Combines the advice of the experts in a single document
    Provides the content in a standardized format
  To access the EMC NetWorker SolVe Desktop, log on to EMC Online Support. You must have a valid service agreement to use this site.

Technical Notes/White Papers
  Technical Notes and White Papers provide an in-depth technical perspective of a product or products as applied to critical business issues or requirements. Types include technology and business considerations, applied technologies, detailed reviews, and best practices planning.

Special notice conventions used in this document
EMC uses the following conventions for special notices:

NOTICE  Addresses practices not related to personal injury.
Note    Presents information that is important, but not hazard-related.

Typographical conventions
EMC uses the following type style conventions in this document:

Bold              Use for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks)
Italic            Use for full titles of publications referenced in text
Monospace         Use for: system code; system output, such as an error message or script; pathnames, file names, prompts, and syntax; commands and options
Monospace italic  Use for variables
Monospace bold    Use for user input
[ ]               Square brackets enclose optional values
|                 Vertical bar indicates alternate selections - the bar means or
{ }               Braces enclose content that the user must specify, such as x or y or z
...               Ellipses indicate non-essential information omitted from the example

Where to get help
EMC support, product, and licensing information can be obtained as follows:

Product information
  For documentation, release notes, software updates, or information about EMC products, go to EMC Online Support.

Technical support
  Go to EMC Online Support and click Service Center. You will see several options for contacting EMC Technical Support. Note that to open a service request, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

Online communities
  Visit the EMC Community Network for peer contacts, conversations, and content on product support and solutions. Interactively engage online with customers, partners, and certified professionals for all EMC products.

Your comments
  Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to DPAD.Doc.Feedback@emc.com.

CHAPTER 1  Overview

  Introduction
  NetWorker data flow

Introduction

The NetWorker software is a network storage management application that is optimized for high-speed backup and recovery operations of large amounts of complex data across entire datazones. This guide addresses non-disruptive performance tuning options. Although some physical devices may not meet the expected performance, it is understood that when a physical component is replaced with a better performing device, another component becomes the bottleneck. This manual attempts to address NetWorker performance tuning with minimal disruption to the existing environment. It attempts to fine-tune feature functions to achieve better performance with the same set of hardware, and to assist administrators to:
  Understand data transfer fundamentals
  Determine requirements
  Identify bottlenecks
  Optimize and tune NetWorker performance

NetWorker data flow

The following figures illustrate the backup and recover data flow for components in an EMC NetWorker datazone. The figures are simplified diagrams; not all interprocess communication is shown. There are many other possible backup and recover data flow configurations.

Figure 1 NetWorker backup data flow

Figure 2 NetWorker recover data flow


CHAPTER 2  Size the NetWorker Environment

This chapter describes how to best determine backup and system requirements. The first step is to understand the environment. Performance issues are often attributed to hardware or environmental issues. An understanding of the entire backup data flow is important to determine the optimal performance expected from the NetWorker software.

  Expectations
  System components
  Storage considerations
  Backup operation requirements
  Components of a NetWorker environment
  Recovery performance factors
  Connectivity and bottlenecks

Expectations

You can determine the backup performance expectations and required backup configurations for your environment based on the Recovery Time Objective (RTO) for each client.

Determine backup environment performance expectations

It is important to determine performance expectations, while keeping in mind the environment and the devices used. Sizing considerations for the backup environment are listed here:
  Review the network and storage infrastructure information before setting performance expectations for your backup environment, including the NetWorker server, storage nodes, and clients.
  Review and set the RTO for each client.
  Determine the backup window for each NetWorker client.
  List the amount of data to be backed up for each client during full and incremental backups.
  Determine the data growth rate for each client.
  Determine client browse and retention policy requirements.

Some suggestions to help identify bottlenecks and define expectations are:
  Create a diagram.
  List all system, storage, network, and target device components.
  List data paths.
  Mark down the bottleneck component in the data path of each client.

Connectivity and bottlenecks on page 43 provides examples of possible bottlenecks in the NetWorker environment.

Determining the backup window

It is very important to know how much down time is acceptable for each NetWorker client. This dictates the recovery time objective (RTO). Review and document the RTO for each NetWorker client to determine the backup window for each client.

Procedure
1. Verify the available backup window for each NetWorker client.
2. List the amount of data that must be backed up from the clients for full or incremental backups.
3. List the average daily/weekly/monthly data growth on each NetWorker client.

Determining the required backup expectations

Often it is not possible to construct a backup image from a full backup and multiple incremental backups if the acceptable down time is limited.
Full backups might be required more frequently, which results in a longer backup window. This also increases network bandwidth requirements. Methods to determine the required backup configuration expectations for the environment are listed here:

  Verify the existing backup policies and ensure that the policies will meet the RTO for each client.
  Estimate the backup window for each NetWorker client based on the information collected.
  Determine the organization of the separate NetWorker client groups based on these parameters:
    Backup window
    Business criticality
    Physical location
    Retention policy
  Ensure that the RTO can be met with the backup created for each client. Backups become increasingly expensive as the acceptable downtime/backup window decreases.

System components

Every backup environment has a bottleneck. It may be a fast bottleneck, but the bottleneck will determine the maximum throughput obtainable in the system. Backup and restore operations are only as fast as the slowest component in the backup chain.

Performance issues are often attributed to hardware devices in the datazone. This guide assumes that hardware devices are correctly installed and configured.

This section discusses how to determine requirements. For example:
  How much data must move?
  What is the backup window?
  How many drives are required?
  How many CPUs are required?

Devices on backup networks can be grouped into four component types, based on how and where devices are used. In a typical backup network, the following four components are present:
  System
  Storage
  Network
  Target device

System

Several components impact performance in system configurations:
  CPU
  Memory
  System bus (this determines the maximum available I/O bandwidth)

CPU requirements

Determine the optimal number of CPUs required, if 5 MHz is required to move 1 MB of data from a source device to a target device. For example, a NetWorker server or storage

node backing up to a local tape drive at a rate of 100 MB per second requires 1 GHz of CPU power:
  500 MHz is required to move data from the network to the NetWorker server or storage node.
  500 MHz is required to move data from the NetWorker server or storage node to the backup target device.

NOTICE  1 GHz on one type of CPU does not directly compare to 1 GHz of a CPU from a different vendor.

The CPU load of a system is impacted by many additional factors. For example:
  High CPU load is not necessarily a direct result of insufficient CPU power, but can be a side effect of the configuration of the other system components.
  Drivers: Be sure to investigate drivers from different vendors as performance varies. Drivers on the same operating system can achieve the same throughput with a significant difference in the amount of CPU used.
  Disk drive performance:
    On a backup server with 400 or more clients in /nsr, a heavily used disk drive often results in CPU use of more than 60 percent. The same backup server with /nsr on a disk array with low utilization results in CPU use of less than 15 percent.
    On UNIX and Windows, if a lot of CPU time is spent in privileged mode, or if the percentage of CPU load is higher in system time than in user time, it often indicates that the NetWorker processes are waiting for I/O completion. If the NetWorker processes are waiting for I/O, the bottleneck is not the CPU, but the storage used to host the NetWorker server.
    On Windows, if a lot of time is spent on Deferred Procedure Calls, it often indicates a problem with device drivers.
  Monitor CPU use according to the following classifications:
    User mode
    System mode
  Hardware component interrupts cause high system CPU use, resulting in poor performance. If the number of device interrupts exceeds 10,000 per second, check the device.

Memory requirements

It is important to meet the minimum memory requirements to ensure that memory is not a bottleneck during backup operations.
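The CPU sizing rule above (5 MHz of CPU for each 1 MB/s of data moved, counted once to receive data and once to send it to the target device) can be expressed as a quick calculation. This is a rough planning sketch, not a NetWorker tool; the function name is illustrative:

```python
def required_cpu_mhz(throughput_mb_per_s: float) -> float:
    """Estimate CPU capacity (MHz) needed to move data at the given rate.

    Rule of thumb from the sizing guidance: 5 MHz per 1 MB/s, applied
    twice -- once for network -> server/storage node, and once for
    server/storage node -> backup target device.
    """
    mhz_per_mb = 5.0
    return 2 * mhz_per_mb * throughput_mb_per_s

# A storage node backing up to a local tape drive at 100 MB/s:
# 500 MHz inbound + 500 MHz outbound = 1 GHz total.
print(required_cpu_mhz(100))  # 1000.0
```

Remember the NOTICE above: 1 GHz on one CPU type does not directly compare to 1 GHz from a different vendor, so treat the result as a floor, not a guarantee.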
The following table lists the minimum memory requirements for the NetWorker server.

Table 2 Minimum required memory for the NetWorker server
Number of clients        Minimum required memory
Less than 50             4 GB
...                      ... GB

Table 2 Minimum required memory for the NetWorker server (continued)
Number of clients        Minimum required memory
More than ...            ... GB
1024 parallelism value   ...

Based on performance observations for savegroups similar to the following configuration, the minimum required memory is 1 GB on Linux, ... MB on Solaris, and ... MB on Windows:
  25 clients
  Client parallelism = 8
  5 remote storage nodes with 32 AFTD devices
  Default device target sessions (4)
  Default device max sessions (32)

NOTICE  Increasing the server parallelism value affects the NetWorker server IOPS on the index and media databases.

Monitor the pagefile or swap use
Memory paging should not occur on a dedicated backup server as it will have a negative impact on performance in the backup environment.

Windows 2003 considerations
There are recommendations specific to the Windows 2003 server:
  By default, Windows 2003 32-bit servers allocate 2 GB of memory to both kernel mode and application mode processes. Allocate additional memory for the NetWorker software to increase performance. Microsoft Knowledge Base Article provides more information.
  If paging is necessary, a maximum pagefile size of 1.5 times the amount of physical RAM installed on the Windows server is recommended. Microsoft Knowledge Base Article provides more information.

Client Direct attribute for direct file access (DFA)
There are many conditions to consider when enabling DFA by using the Client Direct attribute:
  Ensure there is enough CPU power on the client to take advantage of the increased performance capability of DFA to Data Domain (DFA-DD). In most cases, Client Direct significantly improves backup performance. A DFA-DD backup requires approximately 2-10% more CPU load for each concurrent session.
  Each save session using DFA-DD requires up to 70 MB of memory. If there are 10 DFA streams running, then the memory required on a client for all DFA sessions is 700 MB.

  Save sessions to DFA-AFTD use less memory and CPU cycles compared to backups running to DFA-DD using Boost.
  Save sessions using DFA-AFTD use only slightly more memory and CPU cycles compared to traditional saves with mmd.

System bus requirements

Although HBA/NIC placement is critical, the internal bus is probably the most important component of the operating system. The internal bus provides communication between internal computer components, such as CPU, memory, disk, and network.

Bus performance criteria
Bus performance depends on several factors:
  Type of bus
  Data width
  Clock rate
  Motherboard

System bus considerations
There are considerations to note that concern bus performance:
  A faster bus does not guarantee faster performance.
  Higher-end systems have multiple buses to enhance performance.
  The bus is often the main bottleneck in a system.

System bus recommendations
It is recommended to use PCIeXpress for both servers and storage nodes to reduce the chance of I/O bottlenecks.

NOTICE  Avoid using old bus types, or high-speed components optimized for an old bus type, as they generate too many interrupts, causing CPU spikes during data transfers.

PCI-X and PCIeXpress considerations
There are considerations that specifically concern the PCI-X and PCIeXpress buses:
  PCI-X is a half-duplex bi-directional 64-bit parallel bus.
  PCI-X bus speed may be limited to the slowest device on the bus; be careful with card placement.
  PCIeXpress is a full-duplex bi-directional serial bus using 8b/10b encoding.
  PCIeXpress bus speed may be determined per each device.
  Do not connect a fast HBA/NIC to a slow bus; always consider bus requirements. Silent packet drops can occur on a PCI-X GbE NIC when bus requirements cannot be met.
  Hardware that connects fast storage to a slower HBA/NIC will slow overall performance. The component 70 percent rule on page 36 provides details on the ideal component performance levels.
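The guidance above (do not connect a fast HBA/NIC to a slow bus) can be checked numerically by comparing bus throughput against the Fibre Channel bus speed requirements given elsewhere in this chapter. A minimal sketch; the dictionary entries are a small sample of the bus specifications table plus the stated FC requirements, and the function name is illustrative:

```python
# Throughput in MB/s: FC requirements and a sample of bus specifications.
FC_REQUIRED_MB_S = {"4Gb": 425, "8Gb": 850, "10Gb": 1250}
BUS_MB_S = {
    "PCI-X 133": 1067,
    "PCIeXpress 1.0 x4": 1000,
    "PCIeXpress 1.0 x8": 2000,
    "PCIeXpress 2.0 x8": 4000,
}

def bus_can_sustain(bus: str, fc_link: str) -> bool:
    """True if the bus bandwidth meets the Fibre Channel requirement."""
    return BUS_MB_S[bus] >= FC_REQUIRED_MB_S[fc_link]

print(bus_can_sustain("PCI-X 133", "8Gb"))         # True: 1,067 >= 850
print(bus_can_sustain("PCIeXpress 1.0 x4", "10Gb"))  # False: 1,000 < 1,250
```

A check like this makes card placement decisions explicit: a 10 Gb FC HBA on a PCIe 1.0 x4 slot cannot meet its bus requirement even before any other device shares the bus.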

Bus speed requirements
Required bus speeds are based on the Fibre Channel size:
  4 Gb Fibre Channel requires 425 MB/s
  8 Gb Fibre Channel requires 850 MB/s
  10 Gb Fibre Channel requires 1,250 MB/s

Bus specifications
Bus specifications are based on bus type, MHz, and MB per second. Bus specifications for specific buses are listed in the following table.

Table 3 Bus specifications
Bus type              MHz    MB/second
PCI 32-bit            33     133
PCI 64-bit            33     266
PCI 32-bit            66     266
PCI 64-bit            66     533
PCI 64-bit            100    800
PCI-X                 133    1,067
PCI-X                 266    2,134
PCI-X                 533    4,268
PCIeXpress 1.0 x 1           250
PCIeXpress 1.0 x 2           500
PCIeXpress 1.0 x 4           1,000
PCIeXpress 1.0 x 8           2,000
PCIeXpress 1.0 x 16          4,000
PCIeXpress 1.0 x 32          8,000
PCIeXpress 2.0 x 8           4,000
PCIeXpress 2.0 x 16          8,000
PCIeXpress 2.0 x 32          16,000

Storage considerations

There are components that impact the performance of storage configurations:
  Storage connectivity: local versus SAN-attached versus NAS-attached.
  Use of storage snapshots: the type of snapshot technology used determines the read performance.

  Storage replication: Some replication technologies add significant latency to write access, which slows down storage access.
  Storage type:
    Serial ATA (SATA) is a storage interface for connecting host bus adapters to storage devices such as hard disk drives and optical drives.
    Fibre Channel (FC) is a gigabit-speed network technology primarily used for storage networking.
    Flash is non-volatile computer storage used for general storage and the transfer of data between computers and other digital products.
  I/O transfer rate of storage: The I/O transfer rate of storage is influenced by different RAID levels; the best RAID level for the backup server is RAID1 or RAID5. Backup to disk should use RAID3.
  Scheduled I/O: If the target system is scheduled to perform I/O-intensive tasks at a specific time, schedule backups to run at a different time.
  I/O data: Raw data access offers the highest level of performance, but does not logically sort saved data for future access. File systems with a large number of files have degraded performance due to the additional processing required by the file system.
  Compression: If data is compressed on the disk by the operating system or an application, the data is decompressed before a backup. The CPU requires time to re-compress the files, and disk speed is negatively impacted.

Storage IOPS requirements

The file system used to host the NetWorker data (/nsr) must be a native file system supported by the operating system vendor for the underlying operating system, and must be fully POSIX compliant. If the storage performance requirements measured in I/O operations per second (IOPS) documented in this section are not met, NetWorker server performance is degraded and the server can be unresponsive for short periods of time.
If storage performance falls below 50% of the desired IOPS requirements:
  NetWorker server performance can become unreliable
  The NetWorker server can experience prolonged unresponsive periods
  Backup jobs can fail

NetWorker server requirements, with respect to storage performance, are determined by the following:
  NetWorker datazone monitoring
  Backup jobs
  Maintenance tasks
  Reporting tasks
  Manual tasks

NetWorker server and storage node disk write latency

It is important to determine the requirements for NetWorker server and storage node write latency. For the storage hosting /nsr on NetWorker servers and storage nodes, write latency is more critical than overall bandwidth. This is because NetWorker uses a very large number of small random I/Os for internal database access. The following table lists the effects of disk write latency on performance during NetWorker backup operations.

Table 4 Disk write latency results and recommendations
25 ms and below: Stable backup performance; optimal backup speeds. Recommended: Yes
50 ms: Slow backup performance (the NetWorker server is forced to throttle database updates); delayed and failed NMC updates. Recommended: No
100 ms: Failed savegroups and sessions. Recommended: No
... ms: Delayed NetWorker daemon launch; unstable backup performance; volumes unprepared for write operations; unstable process communication. Recommended: No

NOTICE  Avoid using synchronous replication technologies, or any other technology that adversely impacts latency.

Recommended server and storage node disk settings

It is important to consider recommendations for optimizing NetWorker server and storage node disk performance:
  For NetWorker servers under increased load (the number of parallel sessions occurring during a backup exceeds 100), dedicate a fast disk device to host the NetWorker databases.
  For disk storage configured for the NetWorker server, use RAID-10.
  For large NetWorker servers with server parallelism higher than 400 parallel sessions, split the file systems used by the NetWorker server. For example, split the /nsr folder from a single mount point into multiple mount points for:
    /nsr
    /nsr/res
    /nsr/index
    /nsr/mm
  For NDMP backups on the NetWorker server, use a separate location for the /nsr/tmp folder to accommodate large temporary file processing.
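When reviewing measured /nsr write latencies, the thresholds in Table 4 can be expressed as a simple check. The bucket boundaries between the sampled latencies (25, 50, and 100 ms) are an interpretation of the table, and the function name is illustrative:

```python
def latency_assessment(latency_ms: float) -> str:
    """Map measured /nsr disk write latency to the Table 4 assessment."""
    if latency_ms <= 25:
        return "stable backup performance, optimal speeds (recommended)"
    if latency_ms < 100:
        return "slow backups; server throttles database updates (not recommended)"
    return "failed savegroups/sessions and unstable behavior likely (not recommended)"

print(latency_assessment(20))
print(latency_assessment(50))
print(latency_assessment(120))
```

Such a check is a convenient companion to whatever tool reports the latency (for example, iostat on Linux); the measurement itself is outside the scope of this sketch.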

  Use the operating system to handle parallel file system I/O even if all mount points are on the same physical location. The operating system handles parallel file system I/O more efficiently than the NetWorker software.
  Use RAID-3 disk storage for AFTD.
  For antivirus software, disable scanning of the NetWorker databases. If the antivirus software scans the /nsr folder, performance degradation, time-outs, or NetWorker database corruption can occur because of frequent file open/close requests. The antivirus exclude list should also include NetWorker storage node locations used for Advanced File Type Devices (AFTD). Disabled antivirus scanning of specific locations might not be effective if the software still scans all locations during file access despite the exclude list, or if it only skips scanning previously accessed files. Contact the specific vendor to obtain an updated version of the antivirus software.
  For file caching, aggressive file system caching can cause commit issues for:
    The NetWorker server: all NetWorker databases can be impacted (nsr/res, nsr/index, nsr/mm).
    The NetWorker storage node: when configured to use AFTD.
    Be sure to disable delayed write operations, and use driver Flush and Write-Through commands instead.
  Disk latency considerations for the NetWorker server are higher than for typical server applications, as NetWorker uses committed I/O: each write to the NetWorker internal database must be acknowledged and flushed before the next write is attempted. This is to avoid any potential data loss in internal databases.
  Considerations for /nsr in cases where storage is replicated or mirrored:
    Do not use software-based replication, as it adds an additional layer to I/O throughput and causes unexpected NetWorker behavior.
    With hardware-based replication, the preferred method is asynchronous replication, as it does not add latency on write operations.
    Do not use synchronous replication over long-distance links, or links with non-guaranteed latency. SANs limit local replication to 12 km, and longer distances require special handling.
    Do not use TCP networks for synchronous replication, as they do not guarantee latency.
    Consider the number of hops, as each hardware component adds latency.

Storage performance recommendations

The same physical storage subsystem can perform differently depending on the configuration. For example, splitting a single NetWorker mount point (/nsr) into multiple mount points can significantly increase performance due to the parallelism of the file system handler in the operating system.

The NetWorker software does not use direct I/O, but it does issue a sync request for each write operation to ensure data is flushed to the disk to avoid data loss in the event of a system failure (otherwise known as committed I/O writes). Therefore, write caching by the operating system has minimal or no impact. However, a hardware-based write-back cache can significantly improve NetWorker server performance.

Processes can be single-threaded or multi-threaded (depending on the process itself and whether or not it is configurable), but I/O is always blocking I/O (MMDB, RAP) to provide

optimal data protection. The exception is the indexdb, where each client has its own I/O stream.

General recommendations for NetWorker server metadata storage are grouped depending on the NetWorker database type:
- The RAP database is file based with full file read operations with an average I/O of greater than 1 KB.
- The MMDB is block based with a fixed block size of 32 KB, with many read operations and fewer write operations. Set separate mount points for each database on the flash drive to avoid I/O bottlenecks on the NetWorker server update to the RAP, MMDB, and index databases.
- The indexdb is primarily based on sequential write operations with no fixed block size and few read operations. A lower storage tier such as SAS or SATA based storage is sufficient for the indexdb.
- The temporary NetWorker folder (/nsr/tmp) is used heavily during index merge operations for NDMP backups. The temporary folder should reside on higher tier storage, such as FC drives.

I/O pattern considerations

The NetWorker I/O pattern for access to configuration and metadata databases varies depending on the database and its use. However, it generally includes certain elements:
- Normal backup operations: 80% write / 20% read
- Cross-check operations: 20% write / 80% read
- Reporting operations: 100% read

Based on this, the daily cross-check should be performed outside of the primary backup window. Also, external solutions that provide reporting information should be configured to avoid creating excessive loads on the NetWorker metadata databases during the production backup window. I/O block size also varies depending on database and use case, but generally it is rounded to 8 KB requests.

NetWorker datazone monitoring recommendations

Storage must provide a minimum of 30 IOPS to the NetWorker server. This number increases as the NetWorker server load increases.
Backup operation requirements

Requirements for starting and running backup operations make up the largest portion of the NetWorker software workload:
- Depending on the load, add to the IOPS requirements the maximum number of concurrent sessions on the NetWorker server, divided by three. The maximum NetWorker server parallelism is 1024, therefore the highest possible load is 1024/3, approximately 340 IOPS.
- IOPS requirements increase if the NetWorker software must perform both index and bootstrap backups at the same time. In this case, add:
  - 50 IOPS for small servers
  - 150 IOPS for medium servers
  - 400 IOPS for large servers
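The session-driven arithmetic above can be sketched in shell. This is a hypothetical helper (not an EMC-provided tool); the function name is our own, and it applies only the two rules stated in the text: sessions divided by three, plus the flat bootstrap surcharge when bootstrap backups overlap normal operations.

```shell
#!/bin/sh
# Hypothetical helper expressing the sizing rule above.
estimate_backup_iops() {
    sessions=$1     # maximum concurrent sessions, up to 1024
    bootstrap=$2    # 50/150/400 for small/medium/large servers,
                    # or 0 if bootstrap backups run while idle
    echo $(( sessions / 3 + bootstrap ))
}

# A medium server with 300 concurrent sessions and overlapping
# bootstrap backups:
estimate_backup_iops 300 150
```

At the stated maximum parallelism of 1024 with no overlapping bootstrap, the same helper yields the roughly 340 IOPS figure quoted above.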

Manual NetWorker server task requirements on page 32 provides guidelines for small, medium, and large NetWorker servers.

Add the additional IOPS only if the bootstrap backup runs concurrently with the normal backup operations. If the bootstrap backup is configured to run when the NetWorker server is idle, the IOPS requirements do not increase.

- IOPS requirements increase if the NetWorker software is configured to start a large number of jobs at the same time. To accommodate load spikes, add 1 IOPS for each parallel session that is started. It is recommended not to start more than 40 clients per group with the default client parallelism of 4. The result is 160 IOPS during group startup. Starting a large number of clients simultaneously can lead to I/O system starvation.
- Each volume request results in a short I/O burst of approximately 200 IOPS for a few seconds. For environments running a small number of volumes the effect is minimal. However, for environments with frequent mount requests, a significant load is added to the NetWorker server. In this case, add 100 IOPS for high activity (more than 50 mount requests per hour). To avoid the excessive load, use a smaller number of large volumes.
- NDMP backups add additional load due to index post-processing. For large NDMP environment backups with more than 10 million files, add an additional 120 IOPS.

NetWorker kernel parameter requirements

Create a separate startup script for NetWorker servers with heavy loads by enabling the following environment variables before the NetWorker services start:
- tcp_backlog_queue: Add the appropriate kernel parameters in the startup script based on the operating system.
- Open file descriptors: Change the open file descriptors parameter to a minimum of 8192, required on NetWorker servers with a heavy load.

NOTICE Use the default startup script on the NetWorker storage nodes and clients.
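A minimal illustration of such a startup script fragment for a Linux host follows. The sysctl parameter names and values are Linux-specific assumptions (other operating systems use different tunables for the TCP backlog), and the init-script path may differ in your installation; treat this as a sketch, not a mandated configuration.

```shell
# Illustrative fragment for a heavily loaded NetWorker server on
# Linux; parameter names and values are examples, not mandated
# settings.

# Deepen the TCP listen backlog before the daemons start:
sysctl -w net.core.somaxconn=1024
sysctl -w net.ipv4.tcp_max_syn_backlog=1024

# Raise the open file descriptor limit to the recommended minimum:
ulimit -n 8192

# Then start the NetWorker services (path may vary by platform):
/etc/init.d/networker start
```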
The tcp_backlog_queue and the open file descriptor parameters are not required on storage nodes and clients.

Parallel save stream considerations

In NetWorker 8.1 and later, the parallel save streams (PSS) feature provides the ability for each Client resource save set entry to be backed up by multiple parallel save streams to one or more destination backup devices. The save set entry is also called a save point, which is often a UNIX file system mount directory or a Windows volume drive letter. Significant parallel performance gains are possible during PSS backup and subsequent recovery.

The following table lists the items supported for PSS.

Table 5 PSS support for NetWorker 8.1.x and 8.2

NetWorker release | Operating systems | Supported save sets | Supported backup types | Virtual and non-virtual synthetic full | Checkpoint Restart
8.1, 8.1 SP1 | UNIX, Linux | ALL, individual save points | Scheduled | No | No
8.2 | UNIX, Linux, Windows | ALL, individual save points including Disaster_Recovery, deduplicated, and CSV volumes (Windows only) | Scheduled | Yes | No

When a PSS-enabled UNIX Client resource's parallelism value is greater than the resource's number of save points, the scheduled backup save group process divides the parallelism among the save points and starts PSS save processes for all the save points at approximately the same time. However, this is done within the limits of the following:
- The NetWorker server
- Group parallelism controls
- Media device session availability

It is recommended to set the Client resource PSS parallelism value to 2x or more the number of save points.

The number of streams for each PSS save point is determined before the backup from its client parallelism value, and it remains fixed throughout the backup. It is a value from 1 through 4 (maximum), where 1 indicates a single stream with a separate PSS process that traverses the save point's file system to determine the files to back up. The separation of processes for streaming data and traversing the file system can improve performance. Also, the number of save processes that run during a PSS save point backup is equal to the number of save stream processes assigned, with two additional save processes for the director and file system traversal processes.

The default maximum of 4 streams for each PSS save point can be modified to a value from 1 through 8 inclusive by setting the NSR_SG_PSS_MAX_SP_SPLIT=<value 1-8> environment variable on both UNIX and Windows. After setting the environment variable, restart the NetWorker services for the changes to take effect. Increasing the default maximum value can improve the performance for clients with very fast disks.
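For example, on UNIX the cap could be raised before restarting the services. Where the variable is persisted depends on your startup scripts, so treat the placement below as an assumption; only the variable name and value range come from the text above.

```shell
# Raise the per-save-point PSS stream cap from the default of 4 to 8
# for clients with very fast disks, then restart NetWorker so the
# change takes effect (service command/path varies by platform).
NSR_SG_PSS_MAX_SP_SPLIT=8
export NSR_SG_PSS_MAX_SP_SPLIT
```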
When the client parallelism is less than its number of save points, some save point backups run in PSS mode with only a single stream. Other save points run in the default (non-PSS) mode. Therefore, for consistent use of PSS, set the client parallelism to 2x or more the number of save points. This ensures multiple streams for each save point.

It is recommended that large, fast file systems that should benefit from PSS be put in a new separate PSS-enabled Client resource that is scheduled separately from the client's other save points. Separate scheduling is achieved by using two different save groups with different run times, but the same save group can be used if you avoid client disk parallel read contention. Also, use caution when enabling PSS on a single Client resource with the keyword "All". "All" typically expands to include multiple small operating system file systems that reside on the same installation disk(s). These file systems usually do not benefit from PSS but instead might waste valuable PSS multi-streaming resources.

Based on the second example, the "/sp1" save set record is referred to as the master, and its save set time is used in browsing and time-based recover operations. It references the two related records (dependents) through the "*mbs dependents" attribute. This attribute

lists the portable long-format save set IDs of the dependents. Each dependent indirectly references its master through save set name and save time associations. Its master is the save set record with the next highest save time and a save set name with no prefix. Also, each master record has an "*mbs anchor save set time" attribute, which references its dependent with the earliest save set time.

PSS improves on manually dividing a save point /sp1 into multiple sub-directories, "/sp1/subdira", "/sp1/subdirb", and so on, and entering each sub-directory separately in the Client resource. PSS eliminates the need to do this and automatically performs better load balancing optimization at the file level, rather than at the directory level used in the manual approach. PSS creates pseudo sub-directories corresponding to the media save set record names, for example "/sp1", "<1>/sp1", and "<2>/sp1".

Both time-based recovery and save group cloning automatically aggregate the multiple physical save sets of a save point PSS backup. The multiple physical dependent save sets remain hidden. However, there is no automatic aggregation in save set based recovery, scanner, nsrmm, or nsrclone -S manual command line usage. The -S command option requires the PSS save set IDs of both master and dependents to be specified at the command line. However, the -S option should rarely be required with PSS.

When the following PSS client configuration settings are changed, the number of save streams can change for the next save point incremental backup:
- The number of save points
- The parallelism value

NetWorker automatically detects differences in the number of save streams and resets the backup to a level Full accordingly. This starts a new <full, incr, incr, ...> sequence of backups with the same number of media database save sets for each PSS save point backup. This applies to non-full level numbers 1-9 in addition to incremental, which is also known as level 10.
NOTICE The PSS incremental backup of a save point with zero to few files changed since its prior backup will result in one or more empty media database save sets (actual size of 4 bytes), which is to be expected.

Example 1

The following provides performance configuration alternatives for a PSS-enabled client with the following backup requirements and constraints:
- 2 save points: /sp200GB and /sp2000GB
- Save streams able to back up at 100 GB/hr
- Client parallelism is set to 4 (no more than 4 concurrent streams, to avoid disk I/O contention)

Based on these requirements and constraints, the following are specific configuration alternatives with the overall backup time in hours:
- A non-PSS Client resource with both save points at 1 stream each: 20 hours
- A single PSS Client resource with both /sp200GB at 2 streams and /sp2000GB at 2 streams for the same save group: 10 hours
- A non-PSS Client resource with /sp200GB at 1 stream and a PSS Client resource with /sp2000GB at 3 streams for the same client host and same save group: 6.7 hours

- A PSS Client resource with /sp200GB at 4 streams and another PSS Client resource with /sp2000GB at 4 streams for the same client but different sequentially scheduled save groups: 5.5 hours aggregate

Example 2

With client parallelism set to 8 and three save points /sp1, /sp2, and /sp3 explicitly listed or expanded by the keyword ALL for UNIX, the number of PSS streams for each save point backup is 3, 3, and 2 respectively. The number of mminfo media database save set records is also 3, 3, and 2 respectively. For a given save point, /sp1, mminfo and NMC save set query results show three save set records named "/sp1", "<1>/sp1", and "<2>/sp1". These related records have unique save times that are close to one another. The "/sp1" record always has the latest save time, that is, the maximum save time, as it starts last. This makes time-based recovery aggregation for the entire save point /sp1 work automatically.

Example 3

For a PSS Windows save point backup, the number of streams per save point is estimated in the following two scenarios:
- For client parallelism per save point, where client parallelism=5 and the number of save points=2, the number of PSS streams is 3 for the first save point and 2 streams for the second.
- For the save set ALL, with two volumes and client parallelism=5, each volume (save point) gets 2 streams.

Using client parallelism=4, every save point is given 2 save streams. Both Disaster_Recovery volumes, C:\ and D:\, are given 2 streams also. For the save set ALL, the Disaster_Recovery save set is considered to be a single save point. For this example, the system has C:\, D:\, and E:\, where C:\ and D:\ are the critical volumes that make up the Disaster_Recovery save set. The save operation controls how the save points are started, and the total number of streams never exceeds the client parallelism value of 4.
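The timing arithmetic in Example 1 and the stream splits in Examples 2 and 3 can be sketched in shell. Both helpers are hypothetical illustrations: hours_for simply divides save point size by the stream count and the assumed 100 GB/hr per-stream rate, and split_streams merely reproduces the documented splits (floor division with the remainder spread from the first save point, capped at 4 streams by default) rather than NetWorker's actual internal algorithm.

```shell
#!/bin/sh
# Hypothetical helpers illustrating the PSS examples above; these are
# not NetWorker tools.

# Elapsed hours to back up one save point of size_gb gigabytes with
# n parallel streams, each moving 100 GB/hr (Example 1's assumption).
hours_for() {
    size_gb=$1; streams=$2
    awk -v s="$size_gb" -v n="$streams" \
        'BEGIN { printf "%.1f\n", s / n / 100 }'
}

# Streams per save point when client parallelism is divided among
# save points: floor division, remainder handed out starting at the
# first save point, capped at 4 streams per save point by default.
split_streams() {
    parallelism=$1; savepoints=$2; cap=${3:-4}
    base=$(( parallelism / savepoints ))
    extra=$(( parallelism % savepoints ))
    out=""
    i=1
    while [ "$i" -le "$savepoints" ]; do
        n=$base
        [ "$i" -le "$extra" ] && n=$(( n + 1 ))
        [ "$n" -gt "$cap" ] && n=$cap
        [ "$n" -lt 1 ] && n=1
        out="$out $n"
        i=$(( i + 1 ))
    done
    echo "${out# }"
}

hours_for 2000 1      # first alternative: prints 20.0
hours_for 2000 3      # third alternative: prints 6.7
split_streams 8 3     # Example 2: prints "3 3 2"
split_streams 5 2     # Example 3: prints "3 2"
```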
Command line examples

Consider the command line examples for after a PSS backup for the example UNIX save point "/sp1" (Windows is similar):

To view the consolidated job log file information following a scheduled backup of /sp1:

# tail /nsr/logs/sg/<save group name>/<job#>
parallel save streams partial completed savetime=
parallel save streams partial completed savetime=
parallel save streams partial completed savetime=
parallel save streams summary test.xyx.com: /sp1 level=full, 311 MB 00:00: files
parallel save streams summary savetime=

To list only the master save sets for all /sp1 full/incr backups:

# mminfo -ocntr -N "/sp1" -r "client,name,level,nsavetime,savetime(25),ssid,ssid(53),totalsize,nfiles,attrs"

To automatically aggregate "<i>/sp1" with "/sp1" save sets for browse time-based save point recovery:

# recover [-t <now or earlier_master_ss_time>] [-d reloc_dir] [-a] /sp1

The following are considerations and recommendations for benefitting from the PSS performance enhancements:
- The PSS feature boosts backup performance by splitting the save point for PSS into multiple streams based on client parallelism.
- A fairly equal distribution of directory and file sizes in save sets adds additional performance benefit from PSS.
- Large save sets residing on storage with sufficiently high aggregate throughput from concurrent read streams perform significantly better with PSS. Avoid using slow storage with high disk read latency with PSS.
- Ensure the target devices are fast enough to avoid write contention or target device queuing, since PSS splits a single save point into multiple save streams.
- If the target device is Data Domain, ensure PSS does not saturate the maximum sessions allowable limit on the DDR. Each Boost device allows a maximum of 60 NetWorker concurrent sessions.

Internal maintenance task requirements

Requirements for completing maintenance tasks can add significant load to the NetWorker software:
- Daily index and media database consistency checks add 40 IOPS for small environments, and up to 200 IOPS for large environments with more than 1,000 configured clients.
- Environments with very long backup and retention times (1 year or more) experience large internal database growth, resulting in additional requirements of up to 100 to 200 IOPS.
- Purge operations can take 30 IOPS for small environments with up to 1000 backup jobs per day, 100 IOPS for mid-size environments, and up to 200 IOPS for large environments with high loads of 50,000 jobs per day.

Reporting task requirements

Monitoring tools like the NMC server, DPA, custom reporting, or monitoring scripts contribute additional load on the NetWorker server:
- For each NMC server, add an additional 100 IOPS.
- For DPA reporting, add an additional 250 IOPS.
Custom reporting or monitoring scripts can contribute significant load depending on the design. For example, continuous reporting on the NetWorker index and media databases can add up to 500 IOPS.

Manual NetWorker server task requirements

Manual tasks on the NetWorker server can add additional load:
- Each recover session that must enumerate objects on the backup server adds additional load to the NetWorker server. For example, to fully enumerate 10,000 backup jobs, the NetWorker server can require up to 500 IOPS.
- For spikes and unrelated operating system workloads, the total number of calculated IOPS should be increased by 30%.

Single disk performance is often insufficient for large NetWorker servers. The following table provides information on single disk performance. To achieve higher

IOPS, combine multiple disks for parallel access. The best performance for standard disks is achieved with RAID 0+1. However, modern storage arrays are often optimized for RAID 5 access for random workloads on the NetWorker server. Hardware-based write-back cache can significantly improve NetWorker server performance.

The following table provides guidelines on the NetWorker server IOPS requirements.

Table 6 Required IOPS for NetWorker server operations

Type of operation | Small NetWorker environment (1) | Medium NetWorker environment (2) | Large NetWorker environment (3)
Concurrent backups | | |
Bootstrap backups | | |
Backup group startup | | |
Volume management | | |
Large NDMP backups | | |
Standard daily maintenance tasks | | |
Large internal database maintenance | | |
Purge operations | | |
NMC reporting | | |
DPA reporting | | |
Recovery | | |

(1) A small NetWorker server environment is considered to have less than 100 clients, or 100 concurrent backup sessions.
(2) A medium NetWorker server environment is considered to have more than 100 and up to 400 clients, or 250 concurrent backup sessions.
(3) A large NetWorker server environment is considered to have more than 400 clients, or 500 concurrent backup sessions.

IOPS considerations

There are considerations and recommendations for IOPS values:
- The NetWorker software does not limit the number of clients per datazone, but a maximum of 1000 clients is recommended due to the complexity of managing large datazones and the increased hardware requirements on the NetWorker server.

NOTICE As the I/O load on the NetWorker server increases, so does the storage layer service time. If service times exceed the required values, there is a direct impact on NetWorker server performance and reliability. Information on the requirements for maximum service times is available in NetWorker server and storage node disk write latency on page 25.
- If the NetWorker server performs the data movement itself (that is, if the backup device resides on the server rather than on a NetWorker storage node), the backup performance is directly impacted.

The following examples are based on the previously listed requirements.

Small to medium NetWorker datazone:
- Optimized: 200 clients running in parallel with these characteristics:
  - 100 jobs with up to 1,000 backup jobs per day
  - backups spread over time
  - no external reporting
  - no overlapping maintenance tasks
  Minimum required IOPS: 200, recommended IOPS: 400
- Non-optimized: the same workload, however:
  - most backup jobs start at the same time
  - production backups overlap bootstrap and maintenance jobs
  - additional reporting is present
  Minimum required IOPS: 800, recommended IOPS

Large NetWorker datazone:
- Optimized: 1000 clients running in parallel with these characteristics:
  - 500 jobs with up to 50,000 backup jobs per day
  - backups spread over time
  - backups using backup to disk, or large tape volumes
  - no external reporting
  - no overlapping maintenance tasks
  Minimum required IOPS: 800, recommended IOPS: 1000
- Non-optimized: the same workload, however:
  - most backup jobs start at the same time
  - a large number of small volumes is used
  - production backups overlap bootstrap and maintenance jobs
  - additional reporting is present
  Minimum required IOPS: 2000, recommended IOPS: 2500

This example identifies that differences in NetWorker configuration can result in up to a 250% additional load on the NetWorker server. Also, the impact on sizing is such that well-optimized large environments perform better than non-optimized medium environments.

IOPS values for disk drive technologies

The disk drive type determines the IOPS values for random small blocks and sequential large blocks. The following table lists disk drive types and their corresponding IOPS values.

Table 7 Disk drive IOPS values

Disk drive type | Values per device
Enterprise Flash Drives (EFD) | 2500 IO/s for random small block I/Os, or 100 MB/s sequential large blocks

Table 7 Disk drive IOPS values (continued)

Disk drive type | Values per device
Fibre Channel drives (FC drives, 15k RPM) | 180 IO/s for random small block I/Os, or 12 MB/s sequential large blocks
FC drives (10k RPM) | 140 IO/s for random small block I/Os, or 10 MB/s sequential large blocks
SATA2 or LCFC (7200 RPM) | 80 IO/s for random small block I/Os, or 8 MB/s sequential large blocks
SATA drives (7200 RPM) | 60 IO/s for random small block I/Os, or 7 MB/s sequential large blocks
PATA drives (5400 RPM) | 40 IO/s for random small block I/Os, or 7 MB/s sequential large blocks

File history processing

File history is processed by NDMP at the end of the backup operation, rather than during the backup. This results in perceived long idle times. The actual file history processing time is linear with respect to the number of files in the dataset. However, the processing time depends on other storage system factors, such as:
- The RAID type
- The number of disks being configured
- The cache size
- The type of file system for hosting /nsr/index and /nsr/tmp

NOTICE The expected result is approximately 20 minutes per each 10 million files.

File history processing creates a significant I/O load on the backup server, and increases IOPS requirements during processing. If minimum IOPS requirements are not met, file history processing can be significantly slower.

Network

Several components impact network configuration performance:
- IP network: A computer network made of devices that support the Internet Protocol to determine the source and destination of network communication.
- Storage network: The system on which physical storage, such as tape, disk, or file system, resides.
- Network speed: The speed at which data travels over the network.
- Network bandwidth: The maximum throughput of a computer network.
- Network path: The communication path used for data transfer in a network.
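Table 7 can be combined with the 30% spike headroom rule from Manual NetWorker server task requirements to estimate how many drives must be combined for a target IOPS figure. The helper below is a hypothetical sketch of that arithmetic (not an EMC tool), and it ignores RAID write penalties, which would raise the drive count further.

```shell
#!/bin/sh
# Hypothetical sizing helper: take the calculated IOPS, add the 30%
# headroom for spikes and unrelated OS workloads, then divide by the
# per-drive rating from Table 7 (rounding up) to estimate how many
# drives must be combined for parallel access.
drives_needed() {
    required_iops=$1; per_drive=$2
    with_headroom=$(( required_iops * 130 / 100 ))
    echo $(( (with_headroom + per_drive - 1) / per_drive ))
}

# 800 IOPS served by 15k RPM FC drives rated at 180 IO/s each:
drives_needed 800 180
```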

- Network concurrent load: The point at which data is placed in a network to ultimately maximize bandwidth.
- Network latency: The measure of the time delay for data traveling between source and target devices in a network.

Target device

Storage type and connectivity are the types of components that impact performance in target device configurations.

Storage type:
- Raw disk versus Disk Appliance:
  - Raw disk: Hard disk access at a raw, binary level, beneath the file system level.
  - Disk Appliance: A system of servers, storage nodes, and software.
- Physical tape versus virtual tape library (VTL):
  - VTL presents a storage component (usually hard disk storage) as tape libraries or tape drives for use as storage medium with the NetWorker software.
  - Physical tape is a type of removable storage media, generally referred to as a volume or cartridge, that contains magnetic tape as its medium.

Connectivity:
- Local, SAN-attached: A computer network, separate from a LAN or WAN, designed to attach shared storage devices such as disk arrays and tape libraries to servers.
- IP-attached: The storage device has its own unique IP address.

The component 70 percent rule

Manufacturer throughput and performance specifications based on theoretical environments are rarely, if ever, achieved in real backup environments. It is a best practice to never exceed 70 percent of the rated capacity of any component. Components include:
- CPU
- Disk
- Network
- Internal bus
- Memory
- Fibre Channel

Performance and response time significantly decrease when the 70 percent utilization threshold is exceeded. Physical tape drives and solid state disks are the only exceptions to this rule, and should be used as close to 100 percent as possible. Neither tape drives nor solid state disks suffer performance degradation during heavy use.
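The 70 percent rule is simple derating arithmetic; a small sketch (hypothetical helper, and the 10 Gb/s figure in the usage line is an illustrative assumption, not from this guide):

```shell
#!/bin/sh
# Derate a component's vendor-rated capacity to the value worth
# planning around, per the 70 percent rule above.
plan_capacity() {
    rated=$1        # vendor-rated capacity, e.g. in MB/s
    echo $(( rated * 70 / 100 ))
}

# A 10 Gb/s link is roughly 1250 MB/s raw; plan for about 875 MB/s:
plan_capacity 1250
```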

Components of a NetWorker environment

A NetWorker datazone is constructed of several components. The following figure illustrates the main components in a NetWorker environment. The components and technologies that make up a NetWorker environment are listed below.

Figure 3 NetWorker datazone components

Datazone

A datazone is a single NetWorker server and its client computers. Additional datazones can be added as backup requirements increase.

NOTICE It is recommended to have no more than 1500 clients or 3000 client instances per NetWorker datazone. This number reflects an average NetWorker server and is not a hard limit.

NetWorker Management Console

The NetWorker Management Console (NMC) is used to administer the backup server, and it provides backup reporting capabilities. The NMC often runs on the backup server and adds significant load to the backup server. For larger environments, it is recommended to install NMC on a separate computer. A single NMC server can be used to administer multiple backup servers.

Components that determine NMC performance

Some components determine the performance of NMC:
- TCP network connectivity to the backup server: All communication between NMC and the NetWorker server is over TCP, and such high-speed, low-latency network connectivity is essential.
- Memory: Database tasks in larger environments are memory intensive; make sure that the NMC server is equipped with sufficient memory.

- CPU: If the NMC server is used by multiple users, make sure that it has sufficient CPU power to ensure that each user is given enough CPU time slices.

Minimum system requirements for the NMC server

Memory, available disk space, and JRE with Web Start must meet specific minimum requirements for the NMC server:
- Memory: A minimum 1 GHz with 512 MB of RAM is required. Add an additional 512 MB of RAM to run reports.
- Available disk space: Dual core 2 GHz and 2 GB of RAM, with a buffer of disk space for a large Console database with multiple users.
- JRE with Web Start: 55 MB. As the number of NetWorker monitored servers increases, increase the processor capabilities:
  - For 50 servers: dual 1 GHz with no less than 2 GB RAM
  - For 100 servers: dual 1 GHz with no less than 4 GB RAM
  - For 200 servers: quad 1 GHz with no less than 8 GB RAM

Console database

Use formulas to estimate the size and space requirements for the Console database.

Formula for estimating the size of the NetWorker Management Console database

The Console server collects data from the NetWorker servers in the enterprise, and stores the data in its local Console database. By default, the database is installed on the local file system that can provide the most available space. The Console integrates and processes this information to produce reports that facilitate trend analysis, capacity planning, and problem detection. The NetWorker administrator guide provides information about reports.

To store the collected data, allocate sufficient disk space for the Console database.
Several factors affect the amount of disk space required:
- The number of NetWorker servers monitored for the reports
- The number of savegroups run by each of those servers
- The frequency with which savegroups are run
- The length of time report data is saved (data retention policies)

NOTICE Since the amount of required disk space is directly related to the amount of historical data stored, the requirements can vary greatly, on average between 0.5 GB and several GB. Allow for this when planning hardware requirements.

Formulas for estimating the space required for the Console database information

There are existing formulas used to estimate the space needed for different types of data, and to estimate the total space required.

Save set media database

To estimate the space needed for the save set media database, multiply the weekly number of save sets by the number of:
- NetWorker servers monitored by the Console
- Weeks in the Save Set Output policy

The result indicates the length of time that a save set took to run successfully. The results also identify the number of files that were backed up, and how much data was saved during the operation.

Save set output

To estimate the space needed for the save set output, multiply the weekly number of output messages by the number of:
- NetWorker servers monitored by the Console
- Weeks in the Save Set Output Retention policy

The result indicates how many groups and save sets were attempted, and their success or failure.

Savegroup completion data

To estimate the space needed for the savegroup completion data, multiply the weekly number of savegroups by the number of:
- NetWorker servers monitored by the Console
- Weeks in the Completion Data Retention policy

The result can be used to troubleshoot backup problems.

NetWorker server

NetWorker servers provide services to back up and recover data for the NetWorker client computers in a datazone. The NetWorker server can also act as a storage node and control multiple remote storage nodes.

Index and media management operations are some of the primary processes of the NetWorker server:
- The client file index tracks the files that belong to a save set. There is one client file index for each client.
- The media database tracks:
  - The volume name
  - The location of each save set fragment on the physical media (file number/file record)
  - The backup dates of the save sets on the volume
  - The file systems in each save set
- Unlike the client file indexes, there is only one media database per server.
- The client file indexes and media database can grow to become prohibitively large over time, which negatively impacts backup performance.
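The three estimation formulas above share the same shape, so they can be sketched with one helper. This is a hypothetical illustration: it multiplies only the factors the text names (weekly record count, monitored servers, weeks of retention) and does not model per-record sizes, so it yields a record count rather than bytes.

```shell
#!/bin/sh
# Sketch of the Console database space-estimation formulas above.
console_db_records() {
    weekly=$1       # weekly save sets, output messages, or savegroups
    servers=$2      # NetWorker servers monitored by the Console
    weeks=$3        # weeks in the applicable retention policy
    echo $(( weekly * servers * weeks ))
}

# 5000 save sets per week, 4 monitored servers, 12-week retention:
console_db_records 5000 4 12
```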
The NetWorker server schedules and queues all backup operations, tracks real-time backup and restore related activities, and handles all NMC communication. This information is stored for a limited amount of time in the jobsdb, which for real-time operations has the most critical backup server performance impact.

NOTICE The data stored in this database is not required for restore operations.

The nsrmmdbd process performs CPU-intensive operations when thousands of save sets are processed in a single operation. Therefore, cloning operations with large save sets, and any NetWorker maintenance activities, should run outside of the primary backup window.

Components that determine backup server performance

Some components that determine NetWorker server backup performance are:
- Use a 64-bit system for the NetWorker server.
- Use current hardware for the NetWorker server. For example, the current version of the NetWorker server software will not operate well on hardware built more than 10 years ago.
- Minimize these system resource intensive operations on the NetWorker server during heavy loads, such as a high number of concurrent backup/clone/recover streams:
  - nsrim
  - nsrck
- The disk used to host the NetWorker server databases (/nsr):
  - The typical NetWorker server workload consists of many small I/O operations. This is why disks with high latency perform poorly despite having peak bandwidth. High latency rates are the most common bottleneck of a backup server in larger environments.
  - Avoid additional software layers, as these add to storage latency. For example, antivirus software should be configured with the NetWorker databases (/nsr) in its exclusion list.
  - Plan the use of replication technology carefully, as it significantly increases storage latency.
- Ensure that there is sufficient CPU power for large servers to complete all internal database tasks.
- Use fewer, faster CPUs, as systems with fewer high performance CPUs outperform systems with numerous lower performance CPUs.
- Do not attach a high number of high performance tape drives or AFTD devices directly to a backup server.
- Ensure that there is sufficient memory on the server to complete all internal database tasks.
- Off-load backups to dedicated storage nodes when possible, rather than having clients save data directly to the backup server.
NOTICE: The system load that results from storage node processing is significant in large environments. For enterprise environments, the backup server should back up only its internal databases (index and bootstrap).

NetWorker storage node

A NetWorker storage node can be used to improve performance by off-loading from the NetWorker server much of the data movement involved in a backup or recovery operation. NetWorker storage nodes require high I/O bandwidth to manage the transfer of data from local clients or network clients to target devices.

Components that determine storage node performance

Some components that determine storage node performance are:
- Performance of the target device used to store the backup.
- Connectivity of the system. For example, a storage node used for TCP network backups can save data only as fast as it can receive the data from clients.
- I/O bandwidth: Ensure that there is sufficient I/O bandwidth, as each storage node uses available system bandwidth. The backup performance of all devices is therefore limited by the I/O bandwidth of the system itself.
- CPU: Ensure that there is sufficient CPU to send and receive large amounts of data.
- Do not overlap staging and backup operations with a VTL or AFTD solution that uses ATA or SATA drives. Despite the performance of the array, ATA technology suffers significant performance degradation on parallel read and write streams.

NetWorker client

A NetWorker client computer is any computer whose data must be backed up. The NetWorker Console server, NetWorker servers, and NetWorker storage nodes are also NetWorker clients. NetWorker clients hold mission-critical data and are resource intensive. Applications on NetWorker clients are the primary users of CPU, network, and I/O resources. Only read operations performed on the client do not require additional processing.

Client speed is determined by all active instances of a specific client backup at a point in time.

Components that determine NetWorker client performance

Some components that determine NetWorker client performance are:
- Client backups are resource-intensive operations and impact the performance of primary applications. When sizing systems for applications, be sure to consider backups and the related bandwidth requirements. Client applications also use a significant amount of CPU and I/O resources, slowing down backups. If a NetWorker client does not have sufficient resources, both backup and application performance are negatively impacted.
- NetWorker clients with millions of files.
As most backup applications are file-based solutions, much time is spent processing all of the files created by the file system. This negatively impacts NetWorker client backup performance. For example:
- A full backup of 5 million 20 KB files takes much longer than a backup of a half million 200 KB files, although both result in a 100 GB save set.
- For the same overall amount of changed data, an incremental/differential backup of one thousand 100 MB files with 50 modified files takes much less time than one hundred thousand 1 MB files with 50 modified files.
- Encryption and compression are resource-intensive operations on the NetWorker client and can significantly affect backup performance.
- Backup data must be transferred to target storage and processed on the backup server:
  - Client/storage node performance:
    - A local storage node: Uses shared memory and does not require additional overhead.

    - A remote storage node: Receive performance is limited by network components.
  - Client/backup server load: Does not normally slow client backup performance unless the backup server is significantly undersized.

NetWorker databases

Several factors determine the size of NetWorker databases. These factors are described in NetWorker database bottlenecks on page 47.

Optional NetWorker Application Modules

NetWorker Application Modules are used for specific online backup tasks. Additional application-side tuning might be required to increase application backup performance. The documentation for the applicable NetWorker module provides details.

Virtual environments

NetWorker clients can be created for virtual machines for either traditional backup or VADP. Additionally, the NetWorker software can automatically discover virtual environments and changes to those environments on either a scheduled or on-demand basis, and provides a graphical view of those environments.

NetWorker deduplication nodes

A NetWorker deduplication node is an EMC Avamar server that stores deduplicated backup data. The initial backup to a deduplication node should be a full backup. During subsequent backups, the Avamar infrastructure identifies redundant data segments at the source and backs up only unique segments, not entire files that contain changes. This reduces the time required to perform backups, as well as both the network bandwidth and storage space used for backups.

Recovery performance factors

Recovery performance can be impeded by network traffic, bottlenecks, large files, and other factors. Some considerations for recovery performance are:
- File-based recovery performance depends on the performance of the backup server, specifically the client file index. Information on the client file index is available in NetWorker server on page 39.
- The fastest method to recover data efficiently is to run multiple recover commands simultaneously by using save set recover.
  For example, three save set recover operations can provide the maximum possible parallelism, given the number of processes, the volume, and the save set layout.
- If multiple simultaneous recover operations run from the same tape, be sure that the tape does not mount and start until all recover requests are ready. If the tape is used before all requests are ready, the tape is read multiple times, slowing recovery performance.
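The parallel save set recovery described above can be sketched as follows. This is a hypothetical helper, not part of NetWorker: it assumes the NetWorker `recover` binary is on the PATH and that `-S ssid` (save set selection) and `-d destination` (relocation directory) behave as documented for your release — verify against the `recover` man page before use.

```python
import subprocess

def saveset_recover_cmds(ssids, target_dir="/recover"):
    """Build one `recover` command line per save set ID.

    Hypothetical helper for illustration; check `recover` options
    against your NetWorker release before relying on them."""
    return [["recover", "-S", str(s), "-d", target_dir] for s in ssids]

def run_parallel(cmds):
    """Start all recover processes at once, so all tape requests are
    issued together before the volume mounts, then wait for completion."""
    procs = [subprocess.Popen(cmd) for cmd in cmds]
    return [p.wait() for p in procs]

# Example save set IDs (made up for illustration):
cmds = saveset_recover_cmds([4178401, 4178402, 4178403])
for c in cmds:
    print(" ".join(c))
# run_parallel(cmds)  # uncomment on a host with the NetWorker client installed
```

Launching all processes before waiting on any of them is what lets the server batch the requests, matching the guidance that the tape should not mount until all recover requests are ready.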

- Multiplexing backups to tape slows recovery performance.

Connectivity and bottlenecks

The backup environment consists of various devices drawn from system, storage, network, and target device components, with hundreds of models from various vendors available for each of them. The factors affecting performance with respect to connectivity are:
- Components can perform well as standalone devices, but how well they perform with the other devices in the chain is what makes the configuration optimal.
- Components in the chain are of no use if they cannot communicate with each other.
- Backups are data-intensive operations and can generate large amounts of data. Data must be transferred at optimal speeds to meet business needs.
- The slowest component in the chain is considered a bottleneck. In the following figure, the network is unable to gather and send as much data as the other components. Therefore, the network is the bottleneck, slowing down the entire backup process. Any single network device in the chain, such as a hub, switch, or NIC, can be the bottleneck and slow down the entire operation.

Figure 4 Network device bottleneck

As illustrated in the following figure, the network is upgraded from a 100BaseT network to a GigE network, and the bottleneck has moved to another device. The host is now unable to generate data fast enough to use the available network bandwidth. System bottlenecks can be due to lack of CPU, memory, or other resources.

Figure 5 Updated network

As illustrated in the following figure, the NetWorker client is upgraded to a larger system to remove it as the bottleneck. With a better system and more network bandwidth, the bottleneck is now the target device. Tape devices often do not perform as well as other components. Some factors that limit tape device performance are:
- Limited SCSI bandwidth
- Maximum tape drive performance reached

Improve the target device performance by introducing higher performance tape devices, such as Fibre Channel based drives. SAN environments can also greatly improve performance.

Figure 6 Updated client

As illustrated in the following figure, higher performance tape devices on a SAN remove them as the bottleneck. The bottleneck is now the storage devices. Although the local volumes are performing at optimal speeds, they are unable to use the available system, network, and target device resources. To improve storage performance, move the data volumes to high performance external RAID arrays.

Figure 7 Dedicated SAN

As illustrated in the following figure, the external RAID arrays have improved the system performance. The RAID arrays perform nearly as well as the other components in the chain, ensuring that performance expectations are met. There will always be a bottleneck; however, the impact of the bottleneck device is limited, as all devices are performing at almost the same level as the other devices in the chain.

Figure 8 RAID array

NOTICE: This section does not suggest that all components must be upgraded to improve performance, but attempts to explain the concept of bottlenecks, and stresses the importance of having devices that perform at similar speeds to the other devices in the chain.

NetWorker database bottlenecks

Several factors determine the size of the NetWorker databases:
- NetWorker resource database (/nsr/res or networker install dir/res): The number of configured resources.
- NetWorker jobs database (/nsr/res/jobsdb): The number of jobs, such as backups, restores, and clones, multiplied by the number of days set for retention. This can exceed 100,000 records in the largest environments and is one of the primary performance bottlenecks. The overall size is never significant.
- NetWorker media database (/nsr/mm): The number of save sets in retention and the number of labeled volumes. In the largest environments this can reach several gigabytes of data.
- NetWorker client file index database (/nsr/index): The number of files indexed and in the browse policy. This is normally the largest of the NetWorker databases. For storage sizing, use this formula:

Index catalog size = {[(F+1)*N] + [(I+1) * (DFCR*N)]} * [(1+G)*C]

where:
F = 4 (full browse period, set to 4 weeks in this example)

N = 1,000,000 (one million files for this example)
I = 24 (a four-week browse period for incremental backups, minus the full backups)
DFCR = 3% (daily file change rate for standard user file data)
G = 25% (estimated annual growth rate)
C = 160 bytes (constant number of bytes per file)

For example:
{[(4+1)*1,000,000] + [(24+1) * (3%*1,000,000)]} * [(1+.25)*160]
= {5,000,000 + [25 * 30,000]} * [1.25 * 160]
= 5,750,000 * 200 bytes = 1,150,000,000 bytes = 1150 MB

NOTICE: The index database can be split over multiple locations, and the location is determined on a per-client basis.

The following figure illustrates the overall performance degradation when the performance of the disk on which the NetWorker media database resides is a bottleneck. The chart on the right illustrates net data write throughput (save set + index + bootstrap), and the chart on the left is save set write throughput.

Figure 9 NetWorker server write throughput degradation
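The index catalog sizing formula can be checked with a short script. This is only a transcription of the formula given in this section; the parameter defaults are the worked example's values, not universal constants.

```python
def index_catalog_bytes(F=4, N=1_000_000, I=24, DFCR=0.03, G=0.25, C=160):
    """Estimate client file index size in bytes:
    {[(F+1)*N] + [(I+1)*(DFCR*N)]} * [(1+G)*C]
    Defaults reproduce the worked example in the text."""
    entries = (F + 1) * N + (I + 1) * (DFCR * N)
    return entries * (1 + G) * C

size = index_catalog_bytes()
print(f"{size:,.0f} bytes = {size / 1_000_000:.0f} MB")  # matches the 1150 MB example
```

Substituting your own full browse period, file count, and change rate gives a first-order storage estimate for /nsr/index on a per-client basis.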

CHAPTER 3

Tune Settings

The NetWorker software has various optimization features that can be used to tune the backup environment and to optimize backup and restore performance.

- Optimize NetWorker parallelism
- File system density
- Disk optimization
- Device performance tuning methods
- Network devices
- Network optimization

Optimize NetWorker parallelism

Follow the general best practices for server, group, and client parallelism to ensure optimal performance.

Server parallelism

The server parallelism attribute controls how many save streams the server accepts simultaneously. The more save streams the server can accept, the faster the devices and client disks run. Client disks can run at their performance limit or the limits of their connections. Server parallelism is not used to control the startup of backup jobs, but as a final limit on the sessions accepted by a backup server. The server parallelism value should be as high as possible without overloading the backup server itself.

Client parallelism

Proper client parallelism values are important because backup delays often occur when client parallelism is set too low for the NetWorker server. The best approach for client parallelism values is:
- For regular clients, use the lowest possible parallelism settings that best balance the number of save sets and throughput.
- For the backup server, set the highest possible client parallelism to ensure that index backups are not delayed. This ensures that groups complete as they should.

The best approach to optimize NetWorker client performance is to eliminate client parallelism by reducing it to 1, and then increase the parallelism based on client hardware and data configuration. It is critical that the NetWorker server has sufficient parallelism to ensure that index backups do not impede group completion. The client parallelism values for the client that represents the NetWorker server are:
- Never set parallelism to 1
- For small environments (under 30 servers), set parallelism to at least 8
- For medium environments (30 to 100 servers), set parallelism to at least 12
- For larger environments (100+ servers), set parallelism to at least 16

These recommendations assume that the backup server is a dedicated backup server. The backup server should always be a dedicated server for optimum performance.
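The environment-size guidance above can be expressed as a small helper. The thresholds are the ones stated in the text; the function name itself is illustrative.

```python
def server_client_parallelism(num_servers: int) -> int:
    """Minimum client-parallelism value for the client resource that
    represents the NetWorker server, per the sizing guidance above."""
    if num_servers > 100:   # larger environments
        return 16
    if num_servers >= 30:   # medium environments
        return 12
    return 8                # small environments (never set parallelism to 1)

print(server_client_parallelism(25))
print(server_client_parallelism(250))
```

The returned value is a floor, not a target: the text recommends setting the backup server's client parallelism as high as possible so index backups never delay group completion.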
Group parallelism

Proper group parallelism values ensure optimal operating system performance. The best approach for group parallelism values is:
- Create save groups with a maximum of 50 clients, with group parallelism enforced. Large save groups with more than 50 clients can result in many operating system processes starting at the same time, causing temporary operating system resource exhaustion.

- Stagger save group start times by a small amount to reduce the load on the operating system. For example, it is better to have 4 save groups, each with 50 clients, starting at 5 minute intervals, than to have 1 save group with 200 clients.

Multiplexing

The Target Sessions attribute sets the target number of simultaneous save streams that write to a device. This value is not a limit; therefore a device might receive more sessions than the Target Sessions attribute specifies. The more sessions specified for Target Sessions, the more save sets that can be multiplexed (or interleaved) onto the same volume. AFTD device target and max sessions on page 56 provides additional information on device Target Sessions.

Performance tests and evaluation can determine whether multiplexing is appropriate for the system. Follow these guidelines when evaluating the use of multiplexing:
- Find the maximum rate of each device. Use the bigasm test described in The bigasm directive on page 69.
- Find the backup rate of each disk on the client. Use the uasm test described in The uasm directive on page 70.

If the sum of the backup rates from all disks in a backup is greater than the maximum rate of the device, do not increase server parallelism. If more save groups are multiplexed in this case, backup performance will not improve, and recovery performance might slow down.

File system density

File system density has a direct impact on backup throughput. The NetWorker save operation spends significant time based on file system density, specifically when there are a large number of small files. NetWorker performance for high density file systems depends on disk latency, file system type, and the number of files in the save set. The following figure illustrates the level of impact file system density has on backup throughput.

Figure 10 Files versus throughput
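The multiplexing decision rule above reduces to a single comparison. This sketch simply encodes that rule; the rates would come from your own bigasm/uasm measurements.

```python
def multiplexing_has_headroom(disk_rates_mb_s, device_max_mb_s):
    """Per the guideline above: if the combined client disk backup rate
    already meets or exceeds the device's maximum rate, adding more
    multiplexed sessions will not improve backup throughput (and may
    slow recovery)."""
    return sum(disk_rates_mb_s) < device_max_mb_s

# Three client disks measured at 40 MB/s each vs a drive rated 150 MB/s:
print(multiplexing_has_headroom([40, 40, 40], 150))
# Three disks at 80 MB/s each would saturate the same drive:
print(multiplexing_has_headroom([80, 80, 80], 150))
```

When the function returns False, the guidance is to stop increasing server parallelism rather than multiplex more save groups onto the device.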

Disk optimization

NetWorker release 8.0 introduces a new feature to optimize data read performance from the client during standard file system backups. NetWorker 7.6 and earlier use fixed 64 KB blocks when reading files from a client; NetWorker 8.0 uses an intelligent algorithm to choose an optimal block size value in the range of 64 KB to 8 MB, based on the current read performance of the client system. This block size selection occurs during the actual data transfer, does not add any overhead to the backup process, and can significantly increase disk read performance.

NOTICE: The read block size is not related to the device block size used for backup, which remains unchanged.

This feature is transparent to the rest of the backup process and does not require any additional configuration. You can override the dynamic block size by setting the NSR_READ_SIZE environment variable to a desired value. For example, NSR_READ_SIZE=65536 forces the NetWorker software to use a 64 KB block size during the read process.

Device performance tuning methods

Specific device-related areas can improve performance.

Input/output transfer rate

The I/O rate is the rate at which data is written to a device. Depending on the device and media technology, device transfer rates can range from 500 KB per second to 200 MB per second. The default block size and buffer size of a device affect its transfer rate. If I/O limitations interfere with the performance of the NetWorker server, try upgrading the device to achieve a better transfer rate.

Built-in compression

Turn on device compression to increase effective throughput to the device. Some devices have a built-in hardware compression feature. Depending on how compressible the backup data is, this can improve effective data throughput by a ratio of 1.5:1 to 3:1.

Drive streaming

To obtain peak performance from most devices, stream the drive at its maximum sustained throughput.
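A small sketch of validating an NSR_READ_SIZE override against the documented 64 KB to 8 MB range before exporting it. The helper function is hypothetical; NetWorker itself only reads the variable from the environment of its save processes.

```python
import os

def set_nsr_read_size(nbytes: int) -> None:
    """Validate and export NSR_READ_SIZE.

    Hypothetical convenience wrapper; the 64 KB-8 MB bounds are the
    dynamic read-size range documented for NetWorker 8.0."""
    if not 64 * 1024 <= nbytes <= 8 * 1024 * 1024:
        raise ValueError("NSR_READ_SIZE must be between 64 KB and 8 MB")
    os.environ["NSR_READ_SIZE"] = str(nbytes)

set_nsr_read_size(65536)  # force the pre-8.0 fixed 64 KB read size
print(os.environ["NSR_READ_SIZE"])
```

The variable must be set in the environment of the process that performs the backup for it to take effect.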
Without drive streaming, the drive must stop to wait for its buffer to refill, or to reposition the media, before it can resume writing. This can cause a delay in the cycle time of the drive, depending on the device.

Device load balancing

Balance the data load for simultaneous sessions more evenly across available devices by adjusting target and max sessions per device. This parameter specifies the minimum number of save sessions to be established before the NetWorker server attempts to assign save sessions to another device. More information on device target and max sessions is available in AFTD device target and max sessions on page 56.

Fragmenting a disk drive

A fragmented file system on Windows clients can cause substantial performance degradation, based on the amount of fragmentation. Defragment disks to avoid performance degradation.

Procedure
1. Check the file system performance on the client by using a copy or ftp operation without NetWorker to determine whether disk fragmentation might be the problem.
2. Run the Disk Defragmenter tool on the client to consolidate data so the disk can perform more efficiently:
   a. Click to open Disk Defragmenter.
   b. Under Current status, select the disk to defragment.
   c. Click Analyze disk to verify that fragmentation is a problem. If prompted for an administrator password or confirmation, type the password or provide confirmation.
   d. When Windows has finished analyzing the disk, check the percentage of fragmentation on the disk in the Last Run column. If the number is above 10%, defragment the disk.
   e. Click Defragment disk. If prompted for an administrator password or confirmation, type the password or provide confirmation.
   The defragmentation might take from several minutes to a few hours to complete, depending on the size and degree of fragmentation of the hard disk. You can still use the computer during the defragmentation process.

Network devices

When data is backed up from remote clients, the routers, network cables, and network interface cards can affect the backup and recovery operations. This section lists the performance variables in network hardware, and suggests some basic tuning for networks.
The following items address specific network issues:
- Network I/O bandwidth: The maximum data transfer rate across a network rarely approaches the specification of the manufacturer because of network protocol overhead.

NOTICE: The following statement concerning overall system sizing must be considered when addressing network bandwidth: each attached tape drive (physical, VTL, or AFTD) uses available I/O bandwidth, and also consumes CPU, as the data still requires processing.

- Network path: Networking components such as routers, bridges, and hubs consume some overhead bandwidth, which degrades network throughput performance.
- Network load: Do not attach a large number of high-speed NICs directly to the NetWorker server, as each IP address uses significant amounts of CPU resources. For example, a midsize system with four 1 GB NICs uses more than 50 percent of its resources to process TCP data during a backup. Other network traffic limits the bandwidth available to the NetWorker server and degrades backup performance. As the network load reaches a saturation threshold, data packet collisions degrade performance even more.

The nsrmmdbd process performs highly CPU-intensive operations when thousands of save sets are processed in a single operation. Therefore, cloning operations with huge save sets and NetWorker maintenance activities should run outside of the primary backup window.

Fibre Channel latency

To reduce the impact of link latency, increase the NetWorker volume block size. The result of an increased volume block size is that data streams to devices without a frequent need for round-trip acknowledgement. For low-latency links, an increased block size does not have any effect. For high-latency links, the impact can be significant, and the link will not reach the same level of performance as local links.

NOTICE: High bandwidth does not directly increase performance if latency is the cause of slow data.

The following table shows the effect of different block sizes on a physical LTO-4 tape drive connected locally and over a 15 km 8 Gb DWDM link.

Table 8 The effect of block size on an LTO-4 tape drive

Block size   Local backup performance   Remote backup performance
64 KB        173 MB/second              60 MB/second
128 KB       173 MB/second              95 MB/second
256 KB       173 MB/second              125 MB/second
512 KB       173 MB/second              130 MB/second
1024 KB      173 MB/second              130 MB/second

The following figure illustrates that NetWorker backup throughput drops from 100 percent to 0 percent as the delay increases to 2.0 ms.
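A simplified model gives intuition for why larger blocks help on a high-latency link. This is an illustrative stop-and-wait approximation, not the vendor's measurement method, and its absolute numbers are not meant to reproduce Table 8: if each block requires one round-trip acknowledgement, effective throughput is roughly blocksize / (blocksize/bandwidth + RTT).

```python
def effective_throughput_mb_s(block_kb, link_mb_s, rtt_ms):
    """Illustrative stop-and-wait model: one round-trip acknowledgement
    per block. Real tape/FC pipelines overlap transfers, so treat this
    only as intuition for the block-size behavior in the table above."""
    block_mb = block_kb / 1024
    transfer_s = block_mb / link_mb_s        # time on the wire
    return block_mb / (transfer_s + rtt_ms / 1000)

# Assumed values: ~800 MB/s usable on an 8 Gb link, ~0.15 ms RTT over 15 km.
for kb in (64, 128, 256, 512, 1024):
    print(kb, "KB:", round(effective_throughput_mb_s(kb, 800, 0.15), 1), "MB/s")
```

The model shows the same qualitative trend as the table: throughput rises with block size because the fixed round-trip cost is amortized over more data, and flattens once transfer time dominates the RTT.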
54 EMC NetWorker 8.2 Performance Optimization Planning Gide

Figure 11 Fibre Channel latency impact on data throughput

DataDomain

Backup to Data Domain storage can be configured by using multiple technologies:
- NetWorker 8.1 and later supports DD Boost over Fibre Channel. This feature leverages the advantages of the Boost protocol in a SAN infrastructure. It provides the following benefits:
  - DD Boost over Fibre Channel (DFC) backup with Client Direct is 20-25% faster when compared to backup with DD VTL.
  - The next subsequent full backup is 3 times faster than the first full backup.
  - Recovery over DFC is 2.5 times faster than recovery using DD VTL.
- Backup to VTL: NetWorker devices are configured as tape devices and data transfer occurs over Fibre Channel. Information on VTL optimization is available in Number of virtual device drives versus physical device drives on page 57.
- Backup to AFTD over CIFS or NFS: Overall network throughput depends on CIFS and NFS performance, which depends on the network configuration. Network optimization on page 57 provides best practices on backup to AFTD over CIFS or NFS. Inefficiencies in the underlying transport limit backup performance to 70-80% of the link speed. For optimal performance, NetWorker release 7.5 Service Pack 2 or later is required.
- The Client Direct attribute to enable direct file access (DFA), introduced in NetWorker 8.0:
  - Client Direct to Data Domain (DD) using Boost provides much better performance than DFA-AFTD using CIFS/NFS.
  - Backup performance with Client Direct enabled (DFA-DD/DFA-AFTD) is 20-60% faster than traditional backup using mmd.

  - With an increasing number of streams to a single device, DFA handles the backup streams much better than mmd.
- Backup to Data Domain by using a native device type: NetWorker 7.6 Service Pack 1 provides a new device type designed specifically for native communication to Data Domain storage over TCP/IP links. With proper network optimization, this protocol is capable of using up to 95 percent of the link speed, even at 10 Gb/sec rates, and is currently the most efficient network transport. In NetWorker 7.6.1, each Data Domain device configured in NetWorker is limited to a maximum of 10 parallel backup streams. If higher parallelism is required, configure more devices, up to a limit defined by the NetWorker server edition. In NetWorker and later, limit the number of sessions per device to 60.

Regardless of the method used for backup to Data Domain storage, the aggregate backup performance is limited by the maximum ingress rate of the specific Data Domain model.

The minimum required memory for a NetWorker Data Domain OST device with each device's total streams set to 10 is approximately 160 MB. Each OST stream for Boost takes an additional 16 MB of memory.

DD Boost takes between 2% and 40% additional CPU time during backup operations as compared to non-client deduplicated backups, for a much shorter period of time. However, the overall CPU load of a backup to DD Boost is less when compared to traditional mmd based backups using CIFS/NFS.

AFTD device target and max sessions

Each supported operating system has specific optimal Advanced File Type Device (AFTD) device target and max sessions settings for the NetWorker software. Details for NetWorker versions 7.6 and earlier, and 7.6 Service Pack 1 and later, are included.

NetWorker 7.6 and earlier software

The current NetWorker 7.6 and earlier default settings for AFTD target sessions (4) and max sessions (512) are not optimal for AFTD performance.
To optimize AFTD performance for NetWorker 7.6 and earlier, change the default values:
- Set device target sessions from 4 to 1.
- Set device max sessions from 512 to 32 to avoid disk thrashing.

NetWorker 7.6 Service Pack 1 and later

The defaults for AFTD target sessions and max device sessions are now set to the optimal values for AFTD performance:
- Device target sessions is 1.
- Device max sessions is 32, to avoid disk thrashing.

If required, both the device target and max session attributes can be modified to reflect values appropriate for the environment.

NetWorker 8.0 and later software

The dynamic nsrmmd attribute in the NSR storage node resource is off by default for the dynamic provisioning of nsrmmd processes. Turning on the dynamic nsrmmd attribute enables dynamic nsrmmd provisioning.

NOTICE: The dynamic nsrmmd feature for AFTD and DD Boost devices in NetWorker 8.1 is enabled by default. In previous NetWorker versions, this attribute was disabled by default.

When the dynamic nsrmmd attribute is enabled and the number of sessions to a device exceeds the number of target sessions, the visible change in behavior is multiple nsrmmd processes on the same device. This continues until the max nsrmmd count or max sessions values are reached, whichever is lower.

To turn on backup to disk, select the Configuration tab and set these attributes as required:
- Target Sessions is the number of sessions the device will handle before another available device is used. For best performance, this should be set to a low value. The default values are 4 (FTD/AFTD) and 6 (DD Boost devices), and it may not be set to a value greater than 60.
- Max Sessions has default values of 32 (FTD/AFTD) and 60 (DD Boost devices), which in most cases provide the best performance. It cannot be set to a value greater than 60.
- Max nsrmmd count is an advanced setting that can be used to increase data throughput by restricting the number of backup processes that the storage node can simultaneously run. When the target or max sessions are changed, the max nsrmmd count is automatically adjusted according to the formula MS/TS + 4. The default values are 12 (FTD/AFTD) and 14 (DD Boost devices).

NOTICE: It is not recommended to modify both the session attributes and max nsrmmd count at the same time. If you need to modify this value, adjust the session attributes first, apply, then update max nsrmmd count.
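The automatic MS/TS + 4 adjustment can be verified against the stated defaults. Integer division is an assumption here (the text does not specify rounding), but it reproduces both default values exactly.

```python
def max_nsrmmd_count(max_sessions: int, target_sessions: int) -> int:
    """Automatic adjustment formula from the text: MS/TS + 4.
    Integer division is assumed; it matches the documented defaults."""
    return max_sessions // target_sessions + 4

print(max_nsrmmd_count(32, 4))   # FTD/AFTD defaults (MS=32, TS=4) -> 12
print(max_nsrmmd_count(60, 6))   # DD Boost defaults (MS=60, TS=6) -> 14
```

This also shows why the session attributes should be adjusted first: changing MS or TS implicitly changes the computed max nsrmmd count.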
Number of virtual device drives versus physical device drives

The acceptable number of virtual devices stored on an LTO depends on the type of LTO and the number of planned physical devices. The following guidance is based on 70 percent utilization of a Fibre Channel port:
- For LTO-3: 3 virtual devices for every 2 physical devices planned.
- For LTO-4: 3 virtual devices for each physical device planned.

The performance of each of these tape drives on the same port degrades with the number of attached devices. For example:
- If the first virtual drive reaches the 150 MB per second limit:
  - The second virtual drive will not exceed 100 MB per second.
  - The third virtual drive will not exceed 70 MB per second.

Network optimization

Adjust the following components of the network to ensure optimal performance:

Advanced configuration optimization

The default TCP operating system parameters are tuned for maximum compatibility with legacy network infrastructures, but not for maximum performance. Thus, some configuration is necessary. Appendix B: Firewall Support in the NetWorker Release 8.1 (or later) Administration Guide provides instructions on advanced configuration options such as multihomed systems, trunking, and so on.

Operating system TCP stack optimization

There are general rules, and rules that depend on environmental capabilities, for operating system TCP stack optimization. The common rules for optimizing the operating system TCP stack for all use cases are:
- Disable software flow control.
- Increase TCP buffer sizes.
- Increase TCP queue depth.
- Use PCI Express for 10 Gb NICs. Other I/O architectures do not have enough bandwidth. More information on PCI Express is available in PCI-X and PCI Express considerations on page 22.

Rules that depend on environmental capabilities are:
- Some operating systems have internal auto-tuning of the TCP stack. This produces good results in a non-heterogeneous environment. However, for heterogeneous or routed environments, disable TCP auto-tuning.
- Enable jumbo frames when possible. Information on jumbo frames is available in Jumbo frames on page 60.

NOTICE: All network components in the data path must be able to handle jumbo frames. Do not enable jumbo frames if this is not the case.

- TCP hardware offloading is beneficial if it works properly; however, it can cause CRC mismatches. Be sure to monitor for errors if it is enabled.
- TCP window scaling is beneficial if it is supported by all network equipment in the chain.
- TCP congestion notification can cause problems in heterogeneous environments. Only enable it in single operating system environments.

Advanced tuning

IRQ processing for high-speed NICs is very expensive, but performance can be enhanced by binding interrupt handling to specific CPU cores. Specific recommendations depend on the CPU architecture.
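System-wide TCP buffer tuning is normally done with OS tools (sysctl, ndd, netsh), but the "increase TCP buffer sizes" rule has a per-socket equivalent that can be illustrated directly. This sketch is generic socket code, not a NetWorker API; the 4 MB figure is an arbitrary example value.

```python
import socket

# Request larger per-socket TCP buffers. The kernel may clamp the
# request to its configured maximum (e.g. net.core.wmem_max on Linux)
# or report a doubled value, so read the setting back to see the
# effective size.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)

sndbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("effective send buffer:", sndbuf, "bytes")
sock.close()
```

Larger buffers matter most on high-bandwidth, high-latency paths, where the bandwidth-delay product exceeds the default buffer size and the sender stalls waiting for acknowledgements.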
Expected NIC throughput values

High speed NICs are significantly more efficient than common NICs. Common NIC throughput values are in the following ranges:

- 100 Mb link = 6-8 MB/s
- 1 Gb link = 45-65 MB/s
- 10 Gb link = MB/s

With optimized values, throughput for high-speed links can be increased to the following:
- 100 Mb link = 12 MB/s
- 1 Gb link = 110 MB/s
- 10 Gb link = 1100 MB/s

The theoretical maximum throughput for a 10 Gb Ethernet link is GB/s per direction, calculated by converting bits to bytes and removing the minimum Ethernet, IP, and TCP overheads.

Network latency

Increased network TCP latency has a negative impact on overall throughput, despite the amount of available link bandwidth. Longer distances or more hops between network hosts can result in lower overall throughput. Network latency has a high impact on the efficiency of bandwidth use.

For example, the following figures illustrate backup throughput on the same network link with varying latency. For these examples, non-optimized TCP settings were used.

Figure 12 Network latency on 10/100 Mb per second
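The theoretical-maximum calculation described above can be approximated as follows. The overhead figures are standard Ethernet/IP/TCP header sizes for a 1500-byte MTU, not values taken from this guide, so treat the result as an estimate.

```python
# Estimate 10 GbE TCP payload throughput per direction by removing
# minimum per-frame overheads (assumes a standard 1500-byte MTU and
# no TCP options).
LINK_BPS = 10_000_000_000
MTU = 1500                       # bytes of IP payload per frame
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + interframe gap
IP_TCP_HEADERS = 20 + 20         # minimum IPv4 + TCP headers

wire_bytes = MTU + ETH_OVERHEAD          # 1538 bytes on the wire per frame
payload_bytes = MTU - IP_TCP_HEADERS     # 1460 bytes of TCP payload per frame
rate_bytes = LINK_BPS / 8 * (payload_bytes / wire_bytes)
print(f"~{rate_bytes / 1e9:.2f} GB/s per direction")
```

The result (roughly 95 percent of the raw 1.25 GB/s line rate) is consistent with the optimized 10 Gb figure of about 1100 MB/s quoted above.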

Figure 13 Network latency on 1 Gigabit Ethernet

Duplexing

Network links that perform in half-duplex mode cause decreased NetWorker traffic flow performance. For example, a 100 Mb half-duplex link results in backup performance of less than 1 MB per second.

The default duplexing configuration setting on most operating systems is auto-negotiated, as recommended by the IEEE. However, auto-negotiation requires that the following conditions are met:

- Proper cabling
- Compatible NIC adapter
- Compatible switch

Auto-negotiation can result in a link performing as half-duplex. To avoid issues with auto-negotiation, force full-duplex settings on the NIC. The forced full-duplex setting must be applied to both sides of the link. Forced full-duplex on only one side of the link results in failed auto-negotiation on the other side of the link.

Firewalls

The additional layer on the I/O path in a hardware firewall increases network latency and reduces the overall bandwidth use. It is recommended to avoid using software firewalls on the backup server, because the server processes a large number of packets, which results in significant overhead. Details on firewall configuration and impact are available in Appendix B: Firewall Support in the NetWorker Release 8.1 (or later) Administration Guide.

Jumbo frames

It is recommended to use jumbo frames in environments capable of handling them. If both the source and destination computers, and all equipment in the data path, are capable of handling jumbo frames, increase the MTU to 9 KB. These examples are for the Linux and Solaris operating systems:
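The example commands themselves fall outside this excerpt, so the following is a hedged sketch of what they would typically look like. The interface names eth0 and net0 are placeholders; substitute the interfaces on your own hosts.

```shell
# Linux: set a 9000-byte MTU on interface eth0 (placeholder name)
ip link set dev eth0 mtu 9000

# Solaris 11: set the mtu link property on datalink net0 (placeholder name)
dladm set-linkprop -p mtu=9000 net0
```

Remember that the MTU must be raised on every device in the data path (client, storage node, and all switches in between), or jumbo frames should not be enabled at all.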


More information

EMC BACKUP-AS-A-SERVICE

EMC BACKUP-AS-A-SERVICE Reference Architecture EMC BACKUP-AS-A-SERVICE EMC AVAMAR, EMC DATA PROTECTION ADVISOR, AND EMC HOMEBASE Deliver backup services for cloud and traditional hosted environments Reduce storage space and increase

More information

Closer Look at ACOs. Making the Most of Accountable Care Organizations (ACOs): What Advocates Need to Know

Closer Look at ACOs. Making the Most of Accountable Care Organizations (ACOs): What Advocates Need to Know Closer Look at ACOs A series of briefs designed to help advocates nderstand the basics of Accontable Care Organizations (ACOs) and their potential for improving patient care. From Families USA Updated

More information

personal income insurance product disclosure statement and policy Preparation date: 26/03/2004

personal income insurance product disclosure statement and policy Preparation date: 26/03/2004 personal income insrance prodct disclosre statement and policy Preparation date: 26/03/2004 personal income Insrer CGU Insrance Limited ABN 27 004 478 371 AFS Licence No. 238291 This is an important docment.

More information

5 Using Your Verbatim Autodialer

5 Using Your Verbatim Autodialer 5 Using Yor Verbatim Atodialer 5.1 Placing Inqiry Calls to the Verbatim Atodialer ( Yo may call the Verbatim atodialer at any time from any phone. The nit will wait the programmed nmber of rings before

More information

5 High-Impact Use Cases of Big Data Analytics for Optimizing Field Service Processes

5 High-Impact Use Cases of Big Data Analytics for Optimizing Field Service Processes 5 High-Impact Use Cases of Big Analytics for Optimizing Field Service Processes Improving Field Service Efficiency and Maximizing Eqipment Uptime with Big Analytics and Machine Learning Field Service Profitability

More information