HP agile migration of Oracle databases across HP 3PAR StoreServ Storage systems


Technical white paper

HP agile migration of Oracle databases across HP 3PAR StoreServ Storage systems

HP best practices and recommendations for nondisruptive Oracle database migration using HP 3PAR Peer Motion Software

Table of contents

Executive summary
Terminology used
Concepts
Features
Technical prerequisites
Use cases in Oracle RAC and single instance database environment
    Zero downtime technology refresh
    GUI-assisted Oracle database migration
    Appliance-less federated data migration
    Federated array workload management
Effectiveness of HP 3PAR Peer Motion in Oracle database environments
    HP 3PAR Peer Motion configuration
    Online non-disruptive migration of Oracle Database 11gR2 RAC instances in OLTP workload environment
    Online non-disruptive migration of Oracle Database 11gR2 RAC instances in DSS workload environment
    Online non-disruptive migration of Oracle Database 10gR2 single instances in OLTP workload environment
Best practices
Licensing
Troubleshooting
For more information

Executive summary

Storage systems have achieved new levels of reliability with the implementation of internal and external redundancy features. HP 3PAR StoreServ Storage is the next generation of federated Tier 1 storage, built from the ground up to exceed the economic and operational requirements of Oracle database environments by providing the SAN performance, scalability, ultra-high availability (see HP 3PAR StoreServ Storage designed for mission-critical high availability for more information) and simplified management that clients need. It does this through an innovative system architecture that offers storage federation, secure multi-tenancy, built-in thin processing capabilities, and autonomic management and storage tiering features that are unique in the industry. However, when migrating Oracle RAC and single instance databases in an online transaction processing (OLTP) or decision support system (DSS) workload environment from a legacy HP 3PAR StoreServ Storage system to a next-generation one, interrupted access to the database has long been considered standard behavior. To overcome this shortcoming, HP created HP 3PAR Peer Motion Software, the first nondisruptive, do-it-yourself (DIY) data migration software tool that brings Oracle database mobility for enterprise block storage across storage systems. Online nondisruptive migration of Oracle 11gR2 RAC and single instance databases using HP 3PAR Peer Motion doesn't require the shutdown of the Oracle database host(s). All stages of an HP 3PAR Peer Motion data migration lifecycle are orchestrated through a convenient software wizard to provide simple and fool-proof Oracle database migration. Also, the entire Oracle RAC database migration occurs online and without downtime for selected host operating systems¹.
HP 3PAR Peer Motion software acts as an enabler for HP 3PAR StoreServ customers to load-balance Oracle database I/O requests in OLTP and DSS workload environments across storage systems, at will and at any time, and to perform a technology refresh seamlessly at low CAPEX and OPEX for cost-optimized asset lifecycle management. Peer Motion on HP 3PAR StoreServ Storage is part of the comprehensive HP Storage Federation strategy. With federated storage, you configure your storage systems in a true peer-based relationship. This allows customers to handle unpredictable workloads in 24x7 multi-tenant environments while reducing expenses and managing overhead and risk to service levels, by improving query response times and service levels for OLTP workloads, which demand the highest-performance storage systems. HP 3PAR Peer Motion raises agility and efficiency levels in your data center across the boundaries of a single storage system. This paper focuses on the operational side of migrating Oracle RAC and single instance databases in OLTP and DSS workload environments using HP 3PAR Peer Motion software, and uses HP 3PAR Management Console version 4.3 for HP 3PAR StoreServ storage systems. It describes the concepts and technical prerequisites, and suggests best practices and recommendations for implementing HP 3PAR Peer Motion and its effectiveness in migrating Oracle RAC and single instance databases. The paper assumes that the reader is familiar with HP 3PAR StoreServ storage architecture and concepts, and has read the HP 3PAR-to-3PAR Storage Peer Motion Guide². Target audience: This paper is intended for solution architects, engineers, IT managers, HP resellers and customers who wish to migrate Oracle RAC and single instance databases across HP 3PAR StoreServ storage systems in an online nondisruptive manner.
This white paper describes testing performed in May.

Terminology used

Throughout this paper, we identify the array that contains the Oracle databases to be migrated by HP 3PAR Peer Motion as the source array or the source system. The array to which the Oracle database is migrated is called the destination array or the destination system. Volumes created on an HP 3PAR StoreServ Storage system are called Virtual Volumes, or VVs. The server hosting the Oracle databases that undergo a migration from the source to the destination array is called the host. A zoning operation creates a logical connection in the Fibre Channel (FC) Storage Area Network (SAN) between an FC host bus adapter (HBA) on a server and one on a storage system.

Concepts

HP 3PAR Peer Motion migrates block data between two HP 3PAR StoreServ systems without the use of an external appliance or host-based mirroring. The HP 3PAR StoreServ systems involved are interconnected via dual FC peer links and redundant SAN switches that serve as the dedicated path for migrating the Oracle RAC and single instance databases between the source and the destination system.

¹ Refer to the HP 3PAR Peer Motion Online Migration Host Support Matrix at the Single Point of Connectivity Knowledge (SPOCK) website for HP Storage Products, using HP Passport account credentials.
² For additional information on HP 3PAR Peer Motion, go to hp.com/go/peermotion.

HP 3PAR Peer Motion is staged from within the HP 3PAR Management Console (MC), the GUI for managing HP 3PAR StoreServ systems. Coordination of the volume migration between the host and the storage systems is managed by a software wizard running inside the HP 3PAR Management Console. The wizard walks the storage administrator through a series of steps to define the source and destination storage systems, determine the migration type, select the virtual volumes for migration, start and monitor the data transfer, and, finally, clean up the configuration after the Oracle database migration is completed. The wizard informs the administrator when to make SAN zone changes, if needed. The SAN zone/unzone operations during a Peer Motion operation are executed manually, outside of the Management Console, using vendor-specific SAN switch management tools. Starting at the actual data migration step in the wizard, the selected volumes on the source HP 3PAR StoreServ receive a SCSI-3 reservation to prevent them from being migrated twice or mounted to another host. Every source volume under migration is kept up to date during the entire Oracle database migration process. After the migration ends, the migrated source virtual volumes are no longer updated and become stale.

Features

HP 3PAR Peer Motion is an integrated feature of the HP 3PAR Management Console (MC) version 4.3 and later, so it does not require a separate installation. HP 3PAR Peer Motion is assisted by a software wizard throughout the Oracle RAC and single instance database migration process. The wizard presents a sequence of dialog boxes that leads the storage administrator through a series of well-defined steps to complete a Peer Motion migration. The migration commands issued by the wizard to the source and the destination arrays transmit over an out-of-band path, meaning they are outside the data path between the host and the two arrays. The wizard supplies help information with each screen.
The granularity for an Oracle database migration is an entire virtual volume. For an online Oracle RAC and single instance database migration, all the virtual volumes exported to a particular host must migrate together. Executing the Peer Motion wizard requires the administrator to be logged on to the source and destination HP 3PAR StoreServ systems with Super user rights from within the same HP 3PAR MC. The starting point for the wizard is the identification of the source and destination HP 3PAR StoreServ storage systems in the MC. Next, the wizard prompts the administrator to select two unused FC ports on the destination system and reconfigures them as peer ports. Once the FC cabling and the SAN zoning between the peer ports on the destination system and two host ports on the source system are installed, the wizard discovers the two host ports on the source and establishes the peer link connectivity between the source and the destination storage systems. The wizard can optionally copy storage settings like domains (and sets of them), hosts (and sets of them), users, and the configuration details for LDAP, NTP, DNS, Syslog, and SNMP between the source and the destination storage systems. This helps administrators set up the destination storage system in a fashion identical to the source one. When the online migration option is selected in the HP Peer Motion wizard during the Oracle RAC and single instance database migration process, it presents a list of hosts³ from which to choose as the means to select the HP 3PAR StoreServ virtual volumes constituting the Oracle RAC and single instance databases. All virtual volumes on the source storage system exported to a particular host are migrated simultaneously to the destination storage system.
Online Oracle RAC and single instance database migrations offer the opportunity to change the virtual volume provisioning from full to thin, or the reverse, and to select a common provisioning group (CPG) for the virtual volumes on the destination storage system that has different characteristics⁴ than the originating CPG on the source storage system. After these selections are made, the wizard creates a new host on the source storage system that is, in reality, the destination HP 3PAR StoreServ storage system connected to it via the FC peer links. All virtual volumes on the source storage system set to migrate are exported to this new host. In this way, the source virtual volumes become visible to the destination storage system. Next, the wizard executes the admit of these newly exported virtual volumes on the destination storage system, thereby creating the Peer volumes. A Peer volume is a software structure inside the HP 3PAR OS for a VV of a special kind. A Peer volume has a 1:1 relationship with a VV on the source storage system. Peer volumes are created with RAID 0 protection and the Peer provisioning type, and have the same size and name as their counterpart virtual volume on the source storage system. Initially, a Peer volume consumes no space on the destination HP 3PAR StoreServ, but will eventually hold the data from the source VV and become a thin or fully provisioned VV on the destination system with the correct RAID level. Next, the wizard creates an entry for the Oracle database host under migration in the hosts list on the destination storage system and finally exports the Peer volumes from the destination system to the host under migration. After the necessary zone changes, the Peer Motion wizard places SCSI-3 reservations on the VVs under migration on the

³ Oracle Real Application Cluster (RAC) and single instance nodes configured in setting up Oracle RAC and single instance databases, respectively.
⁴ This feature provides flexibility in an Oracle database environment to load-balance the I/Os and optimize the HP 3PAR StoreServ storage system by appropriate placement of data on different tiers, moving the most active data to the fastest (and most expensive) tier and idle data to the slowest (and cheapest) tier.

source storage array. These reservations stay in place after the migration ends as a means of protection, because the source volume is no longer updated after the Oracle database migration ends, rendering it stale within seconds. The SCSI-3 reservation protects the virtual volume from being mounted to the same or another Oracle RAC or single instance node(s). Next, the actual Oracle database migration is started. The migrated virtual volumes do not have to be removed from the source array. After the migration ends, the Peer Motion software optionally cleans up the configuration and reconfigures the peer ports into host ports. The SAN zoning configuration for the peer links needs to be removed manually. The granularity by which Peer Motion moves data is a block of 256 MB. Space for a destination VV is allocated across the physical disks included in the CPG intended for it. No additional temporary space beyond what is needed for the Peer volumes is allocated on the destination storage system during the migration process. Snapshots of a VV configured in Oracle RAC and single instance databases on the source storage system get transferred to the destination storage system; however, the child-parent relationship to the parent VV is not maintained. Peer volumes cannot be subject to snapshots and Physical Copies, nor can they be the source of a Remote Copy operation until the migration has finished. During Oracle RAC and single instance database migration, all reads by the host from the migrating VVs are served by the source HP 3PAR StoreServ. However, writes by the host go to the destination VVs if the block for them was previously migrated. Writes also always go to the source VVs to keep them updated in case the migration should halt for some reason. The WWN of the VV on the destination array is identical to the one on the source array.
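The read and write routing described above can be sketched as a small model (illustrative only, not HP's actual implementation): the array tracks which 256 MB regions of a volume have been copied, serves all host reads from the source, and mirrors writes to the destination only for regions that have already been migrated.

```python
# Illustrative sketch (not HP's implementation) of the I/O routing rules
# described above: data moves in 256 MB regions; reads come from the
# source array, while writes always go to the source and, once a region
# has been migrated, also land on the destination.

REGION = 256 * 1024 * 1024  # Peer Motion migration granularity: 256 MB

class MigratingVolume:
    def __init__(self, size_bytes):
        nregions = (size_bytes + REGION - 1) // REGION
        self.migrated = [False] * nregions  # per-region "copied" flag

    def migrate_region(self, idx):
        self.migrated[idx] = True

    def route_read(self, offset):
        # All host reads are served by the source array during migration.
        return "source"

    def route_write(self, offset):
        # Writes always go to the source (kept current in case the
        # migration halts), and also to the destination if the 256 MB
        # region holding the block was already migrated.
        targets = ["source"]
        if self.migrated[offset // REGION]:
            targets.append("destination")
        return targets

vol = MigratingVolume(1 * 1024**3)   # a 1 GiB volume -> 4 regions
vol.migrate_region(0)                # first 256 MB copied
print(vol.route_read(0))             # source
print(vol.route_write(0))            # ['source', 'destination']
print(vol.route_write(3 * REGION))   # ['source']
```

This also makes clear why the source stays consistent if a migration halts: every write has already been applied to the source copy.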
This way, the default native device mapper multipathing software thinks it is communicating with the virtual volume located on the source array before, during and after the migration. In the Performance Manager section of HP 3PAR Management Console 4.3, a chart showing the throughput for peer ports is available to monitor performance during the entire Oracle database migration process.

Technical prerequisites

The source array in a Peer Motion operation can be an HP 3PAR E-, S-, F- or T-Class, or an HP 3PAR StoreServ 7000 or system. The source array must run HP 3PAR OS or later. The destination array must run HP 3PAR OS or later, because the Peer Motion software is dependent on a command that exists only in this or later versions. This excludes the E- and S-Class systems as a destination for a Peer Motion operation. HP 3PAR StoreServ 7000 systems must run HP 3PAR OS MU1 or later. The HP 3PAR OS version of the source storage system should be earlier than or equal to the MU level that is on the destination storage system. The host operating system should be Red Hat Enterprise Linux⁵.

Note
For a detailed list of supported host operating systems, supported storage systems and their HP 3PAR OS versions, refer to the HP 3PAR Peer Motion Online Migration Host Support Matrix at the Single Point of Connectivity Knowledge (SPOCK) website for HP Storage Products. Host operating systems not included in this list are supported by offline migration, meaning that their VLUNs must be unexported from the host(s) before the Oracle database migration process can start.

The Oracle RAC and single instance database nodes need to have multipathing software installed and configured before the data migration can be started; see the SPOCK website for a list of supported multipathing packages per supported host OS. Note that the round-robin path selection scheme is required.
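As a quick self-check of the round-robin requirement, the path selector on each database node can be inspected before starting a migration. The sketch below is a hedged illustration: the sample text imitates, in simplified form, the output of `multipath -ll` (the real format varies by distribution and multipath-tools version), and the parser flags any device not using the round-robin selector.

```python
# Hedged sketch: flag DM-Multipath devices whose path selector is not
# the round-robin scheme required for online Peer Motion migration.
# SAMPLE imitates simplified `multipath -ll` output; real output differs
# across multipath-tools versions, so treat this parser as illustrative.

SAMPLE = """\
mpatha (360002ac0000000000000000000001234) dm-2 3PARdata,VV
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
mpathb (360002ac0000000000000000000005678) dm-3 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
"""

def non_round_robin_devices(multipath_ll_output):
    """Return names of multipath devices whose policy is not round-robin."""
    offenders, current = [], None
    for line in multipath_ll_output.splitlines():
        if "dm-" in line and "policy=" not in line:
            current = line.split()[0]          # device alias, e.g. mpatha
        elif "policy=" in line and "round-robin" not in line:
            offenders.append(current)
    return offenders

print(non_round_robin_devices(SAMPLE))  # ['mpathb']
```

On a real node, the input would come from running `multipath -ll`; an empty result means all devices already satisfy the round-robin prerequisite.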
HP 3PAR Peer Motion requires two unused FC ports on the destination HP 3PAR StoreServ storage system to be configured in the Peer Connection Mode. This configuration is executed by the Peer Motion wizard after the storage administrator selects two eligible FC ports. After becoming a peer port, the WWN of each of these ports changes from xx:xx:00:02:ac:xx:xx:xx to xx:xx:02:02:ac:xx:xx:xx (the third octet changes from 00 to 02). This changed WWN is the one that will be used in the SAN zoning of the peer links. The physical FC ports for these links must be on HBAs located in different, adjacent HP 3PAR StoreServ controller nodes (for example, nodes 0/1 or 2/3). The actual node pair in use on the source and destination array can be unequal. Exactly two peer links are supported on the host ports of the source storage system. The peer links and the SAN zoning for them should stay in place until the Oracle database migration has finished and the cleanup is completed. No peer ports can exist on the source storage system, even if they are unused. The peer ports on the destination array don't require a dedicated HBA; the one or three other FC ports on the HBAs of an HP 3PAR StoreServ T-Class, 7000 or can be used for host connectivity. On the HP 3PAR F-Class, the second port on the 2-port FC HBA must stay unused. Free ports on an HBA used for Remote Copy (RCFC) cannot be configured as peer ports.

⁵ The testing was performed on Red Hat Enterprise Linux 6.3 (x86-64).
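Because the WWN change follows a fixed pattern, the peer-port WWN to zone can be derived from the original host-port WWN before the reconfiguration even runs. A small illustrative helper (not an HP tool) for preparing the zoning change requests:

```python
# Illustrative helper: derive the WWN a 3PAR FC port assumes after being
# reconfigured as a peer port. As described above, the third octet of
# the port WWN changes from 00 to 02; it is this new WWN that must be
# zoned on the SAN switches for the peer links.

def peer_port_wwn(host_port_wwn: str) -> str:
    octets = host_port_wwn.lower().split(":")
    if len(octets) != 8 or octets[2] != "00":
        raise ValueError("expected a WWN of the form xx:xx:00:02:ac:xx:xx:xx")
    octets[2] = "02"
    return ":".join(octets)

# Example port WWN (hypothetical value, for illustration only):
print(peer_port_wwn("20:52:00:02:ac:00:0a:9b"))  # 20:52:02:02:ac:00:0a:9b
```

The wizard displays the new WWN after reconfiguration (see Figure 7); computing it in advance simply lets zone definitions be drafted ahead of the change window.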

The peer links run over redundant SAN switches; direct FC connectivity is not supported. The peer links between the arrays are dedicated to the Peer Motion migration operation. You can reuse the peer ports as host ports after completing the Oracle RAC and single instance database migration by changing their type.

Use cases in Oracle RAC and single instance database environment

HP 3PAR Peer Motion has a number of use cases in an Oracle RAC and single instance database environment that illustrate its business value. The most important use cases are listed below.

Zero downtime technology refresh

Technology refreshes have always been a challenging activity in an Oracle database environment. Customers are faced with the choice of either staying on their old storage systems or refreshing their technology with forced downtime. Obviously, neither is a good solution. HP 3PAR Peer Motion⁶ acts as an enabler for customers to move Oracle RAC and single instance databases from their current HP 3PAR StoreServ storage systems to new ones in an online nondisruptive manner.

GUI-assisted Oracle database migration

Some data migration tools run entirely or in part from the command line, which makes them hard to use. HP 3PAR Peer Motion, by contrast, offers a 100 percent graphical user interface within the familiar environment of the HP 3PAR Management Console. With all the prerequisites checked in the background by the Peer Motion wizard, storage administrators of all levels can now experience ease of use when migrating Oracle databases.

Appliance-less federated data migration

Appliances brought into the data center, even for brief periods, are a concern for IT managers with respect to security, LAN and SAN connectivity, power, performance and management. Many commercial solutions available today for Oracle database migration require the installation of an appliance that shares SAN bandwidth with other arrays and applications.
HP 3PAR Peer Motion does not depend on new technology layers or extra tools, nor does it require complex planning. The source and destination arrays are interconnected in a peer fashion using a dedicated set of FC links between them. Consequently, there is no disruption to other SAN traffic or to the Ethernet networking environment during setup, data transfer and removal of HP 3PAR Peer Motion.

Federated array workload management

Until recently, the limits of disk array technology centered on capacity and performance. HP 3PAR StoreServ Storage supports more than 2 PB of capacity and delivers industry-leading SPC-1 performance. Efficiently managing unpredictable and dynamic workloads like OLTP and DSS at the data center level on a 24x7 basis is a new challenge in Oracle database environments. Introducing this level of agility requires shifting workload data to a storage system that is less loaded or contains the right type of physical disks. HP 3PAR Peer Motion customers can load-balance their workloads across the available storage systems at will, resulting in cost-optimized asset lifecycle management.

Effectiveness of HP 3PAR Peer Motion in Oracle database environments

This section discusses the effectiveness of HP 3PAR Peer Motion in Oracle RAC and single-instance database environments running OLTP and DSS workloads.

HP 3PAR Peer Motion configuration

This topic describes and illustrates the workflow of a Peer Motion migration. The workflow can be subdivided into five distinct phases. Some figures in this section illustrate the required FC connectivity between the Oracle database host(s), the source array and the destination array at different phases of the migration. The other figures show screenshots of the Peer Motion wizard while outlining some of the steps in the software. The workflow and the screenshots are for the online migration of virtual volumes used in setting up Oracle RAC and single instance databases with Oracle Automatic Storage Management (ASM).
⁶ Refer to the HP 3PAR Peer Motion Online Migration Host Support Matrix at the Single Point of Connectivity Knowledge (SPOCK) website for HP Storage Products.

The SAN zoning starting point for an online Oracle RAC and single instance database migration is depicted in Figure 1. As shown in the figure, the source HP 3PAR StoreServ storage system and the host(s) are zoned to each other in a dual-fabric SAN. The destination HP 3PAR StoreServ storage system has completed its initialization procedure and is operational.

Figure 1. SAN zoning layout at the start of an online Oracle RAC (upper drawing) and single instance (lower drawing) database migration

In the first phase of an Oracle RAC and single instance database migration using HP 3PAR Peer Motion, the destination array is brought into place and the two unused FC ports on it are selected to become peer type ports for their Connection Mode.

These two peer ports are physically interconnected using FC cables and redundant SAN switches and are zoned to two host ports⁷ on the source array. Figure 2 shows the interconnection between the two arrays after the FC cables and the zoning between the host ports on the source array and the peer ports on the destination array are installed. Both arrays are logged on to in the same Management Console using Super user rights credentials.

Figure 2. Interlinking the source and destination arrays over the peer links for an online Oracle RAC (upper drawing) and single instance (lower drawing) database migration

⁷ The host ports can already be in use for connectivity to other Oracle database nodes.

In the second phase of the migration process, the Peer Motion wizard is launched in the HP 3PAR Management Console (MC). The wizard is started by selecting the Peer Motion tab in the Manager Pane of the MC, followed by clicking the Create PM Configuration link in the Common Actions panel. Figure 3 shows the wizard startup link in blue in the Common Actions panel.

Figure 3. The Manager Pane and the Common Actions panel for the Peer Motion tab in HP 3PAR MC

The Peer Motion wizard offers a graphical tool to identify and select the source and destination arrays from a list of available arrays. It applies the selection criteria that were described in the Technical prerequisites section for choosing a source and destination array. Figures 4 and 5 illustrate the selection of the source and destination HP 3PAR StoreServ storage systems.

Figure 4. Dropdown list for selecting the source HP 3PAR StoreServ storage system in a Peer Motion configuration setup

Figure 5. The wizard screen after the source and destination storage systems are selected

Clicking the Next button (not shown) at the bottom of the screen in Figure 5 advances the wizard to the section where the peer ports and the host ports are selected to form the peer links between the source and the destination storage systems. At the start of this process, the wizard graphically displays the ports on the destination storage system that are eligible to become peer ports. Figure 6 shows the screen from where this selection is made. The destination storage system is shown as the right blue rectangle in the figure.

Figure 6. Choosing a peer port and its companion pairing port on the destination HP 3PAR StoreServ storage system

The administrator right-clicks an available port on the blue destination system graphic and then selects the pairing port that will become the second peer port. In Figure 6, port 1:5:4 is chosen and paired with port 0:5:3. Refer to the peer port selection rules listed in the Technical prerequisites section that must be applied during this selection. Next, the details of these two ports are displayed in the two tables shown on the right side of the screen in Figure 6. No changes should be made to these details. To start the reconfiguration of the ports into peer ports, the administrator just needs to click the Apply button. The progress of the peer port configuration is shown in a popup screen, as shown in Figure 7.

Figure 7. Peer port configuration status

Note that the reconfiguration of the ports to the Peer Connection Mode takes the ports offline for a short time, halting all I/O over them. This is not a concern, because the peer ports must be unused at the time of their selection. When both selected ports are reconfigured, the WWN of the peer ports is shown, as can be seen in Figure 7. This WWN differs from the WWN that was in place before the reconfiguration. It is the new WWN shown in Figure 7 that must be zoned in the SAN fabric switches. The port ID and the WWN of all available host ports on the source system are shown below them. Two of these host ports should be connected and zoned to the peer ports on the destination storage system. Click the OK button, shown at the lower right in Figure 7, to exit this screen and proceed. Assuming the cabling and the zoning for the peer links are in place, the wizard determines which host ports are in the peer link zones and displays connecting orange lines between the zoned ports, as shown in Figure 8.

Note
If the cabling or the zoning between the host and the peer ports is not correct, the orange lines will not be shown as they are in Figure 8. In this case, you need to cancel the Peer Motion configuration, troubleshoot the cabling and the zoning layout, and restart the wizard from the point shown in Figure 3.

Figure 8. Peer links configured between the source and the destination HP 3PAR StoreServ storage systems

To complete the Peer Motion configuration phase, click the Finish button (not shown) at the bottom of the screen in Figure 8. Information about the Peer Motion configuration setup can be accessed at any time by selecting the Ports option of the Management Tree in the Management Console, shown in Figure 9. The port numbers in use for the peer links, their WWNs, some details about them and the current throughput per port are shown.

Figure 9. Peer Motion Configuration information for Peer Ports

In the third phase of a Peer Motion operation, you can optionally transfer settings and configuration information such as domains and users from the source to the destination storage system. This is done by clicking the option Copy Storage Settings and Configuration in the Common Actions panel for Peer Motion in the Management Console, shown in Figure 10. In this way, the destination storage system can be set up quickly, duplicating part or all of the source storage system settings. A menu (not shown) allows you to select the settings that need to be migrated. This transfer of settings and configuration information can be skipped.

Figure 10. Third Phase: The Common Actions panel for Peer Motion in the Management Console

Phase 3 marks the end of the preparation stage for a Peer Motion operation before starting the Oracle database migration.

Note
You can remove the Peer Motion configuration setup at any time by clicking Remove PM Configuration in the Common Actions panel for Peer Motion in the Management Console shown in Figure 10. This will reconfigure the peer ports back to host ports on the destination storage system. The SAN fabric zones created for the peer links must be removed manually using the appropriate vendor-specific SAN switch management tool. Any object created in Phase 3 by the wizard on the destination array must be removed manually in order to revert to the original configuration.

Online non-disruptive migration of Oracle Database 11gR2 RAC instances in OLTP workload environment

Refer to the section HP 3PAR Peer Motion configuration for details on the Peer Motion workflow. Table 1 shows the VV and Oracle ASM disk configuration details used to set up the Oracle 11gR2 RAC database instance pmracdb1 of size 512 GiB, running the OLTP workload with 100 users:

Table 1. HP 3PAR StoreServ VV and Oracle ASM disk configuration

VV Name         Size of each VV (GiB)   CPG         Provisioning
PM-FC-R6.OGI    100                     FC-R6-CPG   Full
PM-FC-R6.0/3    500                     FC-R6-CPG   Full
PM-FC-R1.0/3    500                     FC-R1-CPG   Full

Oracle ASM Disk Group   Redundancy   Size (GiB)
OGI                     External     100
DATA                    External     2000
LOG                     External
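Reading Table 1: the shorthand PM-FC-R6.0/3 is taken here to denote four VVs, PM-FC-R6.0 through PM-FC-R6.3 (an assumption consistent with the nine VVs reported later in this section), and with external redundancy ASM performs no mirroring, so each disk group's usable size is simply the sum of its member VV sizes. A quick illustrative sanity check:

```python
# Illustrative sanity check of Table 1. Assumptions: "PM-FC-R6.0/3"
# denotes four VVs of 500 GiB each (.0 through .3), and with ASM
# external redundancy the disk group size equals the raw sum of its
# member VVs. The LOG group size is left blank in the table; it is
# computed here from its member VVs.

vvs_by_group = {
    "OGI":  [100],       # PM-FC-R6.OGI
    "DATA": [500] * 4,   # PM-FC-R6.0 .. PM-FC-R6.3 (Oracle data files)
    "LOG":  [500] * 4,   # PM-FC-R1.0 .. PM-FC-R1.3 (redo-log files)
}

for group, sizes in vvs_by_group.items():
    # External redundancy: usable capacity == sum of member VV sizes.
    print(group, sum(sizes), "GiB")
```

The computed DATA figure matches the 2000 GiB listed in the table, which supports the four-VV reading of the shorthand.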

The Oracle Grid Infrastructure (OGI) voting disk has been configured during the installation of OGI on a separate Oracle ASM disk group, OGI. The Oracle 11gR2 RAC database instance pmracdb1 has been set up using the Oracle ASM disk groups DATA and LOG for housing Oracle data files and redo-log files, respectively. Phase 4 of the Peer Motion migration selects the virtual volumes on the HP 3PAR StoreServ storage system constituting the Oracle Grid Infrastructure and Oracle 11gR2 RAC databases that will be migrated to the destination HP 3PAR StoreServ storage system, and then executes the actual Oracle database migration. This phase is started by clicking Migrate Data in the Common Actions panel for Peer Motion in the Management Console shown in Figure 10. Next, the screen shown in Figure 11 is displayed.

Figure 11. Oracle 11gR2 RAC database migration phase: wizard for selecting the migration type and Oracle 11gR2 RAC database hosts

As a first step, select the type of migration as Online Migration, and then pick the names of the Oracle 11gR2 RAC database nodes as Linked Hosts, listed in the middle of Figure 12, as the way to select the virtual volumes to be migrated. All volumes exported from the source storage system to the selected hosts will be migrated. Once the Oracle 11gR2 RAC database nodes are selected, the names and some characteristics of the virtual volumes to be migrated will be displayed in the bottom part of the screen shown in Figure 12. In this particular migration, nine⁸ fully provisioned virtual volumes exported to two Oracle 11gR2 RAC database nodes will be migrated online nondisruptively.

⁸ One VV constituting the Oracle Grid Infrastructure (OGI) voting disk, four VVs for Oracle data files and four VVs for redo-log files.

Figure 12. Selecting the Oracle 11gR2 RAC database nodes for an online migration

After clicking the Next button in Figure 12 (not shown) to proceed, you can now configure the allocation settings on the destination system for each virtual volume that will be migrated. The wizard screen for this is shown in Figure 13 and contains three steps. First, highlight the virtual volumes shown in Figure 13, and then choose the provisioning type for them. The provisioning type and the CPG type on the destination storage system can be specified per volume or for a group of highlighted virtual volumes. The User CPG and the Copy CPG can be left blank if Same as Source was selected for the provisioning. When you've completed this process, click the Add button shown near the bottom left of the screen. Figure 13 shows a scenario where the allocation settings for five virtual volumes were already defined. All the virtual volumes must be moved out of the list shown at the top left of Figure 13 by clicking the Add button before the Next button in the figure becomes active. When no more volumes are left in the list, proceed by clicking this Next button.

Figure 13. Selecting the allocation settings on the destination HP 3PAR StoreServ storage system per VV to be migrated

Based on the selections made, the Peer Motion wizard now starts the preparation phase that precedes the actual Oracle 11gR2 RAC database migration. While in the preparation phase, the wizard shows a moving blue progress bar, as can be seen in Figure 14.

Figure 14. The Peer Motion wizard running the preparation phase

In the preparation phase, the wizard creates the Peer volumes. For every virtual volume on the source HP 3PAR StoreServ storage system that is to be migrated, a Peer volume is created on the destination HP 3PAR StoreServ storage system. As shown in Figure 15, the Peer volumes are created in RAID 0 with a provisioning type of Peer. The size and the name of a Peer volume on the destination HP 3PAR StoreServ storage system are the same as for the virtual volume under migration on the source HP 3PAR StoreServ storage system. At this point in the process, the Peer volumes do not contain any data.

Figure 15. The Peer Volumes on the destination HP 3PAR StoreServ storage system

In the next step of the preparation phase for the Oracle 11gR2 RAC database migration, the wizard checks the zoning layout between the hosts and both storage arrays. For an online migration, the required zoning layout needs to move from what is shown in Figure 2 to what is shown in Figure 16, and then to what is shown in Figure 17, in this order. In Figure 16, the zone change has enabled FC connectivity from the hosts to both the source and the destination HP 3PAR StoreServ storage systems. The multipathing software on the hosts (native Device Mapper Multipath, DM-Multipath, configured on the RHEL 6.3 Oracle 11gR2 RAC database nodes) manages all four paths connected to it. Next, the Oracle 11gR2 RAC database hosts are unzoned from the source HP 3PAR StoreServ storage system. The SAN fabric zoning layout that is in place after this change is shown in Figure 17.
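The peer-volume rule described above (same name, same size, RAID 0, provisioning type Peer, no data yet) can be modeled in a few lines. This is a conceptual sketch, not the HP 3PAR API; the dictionaries merely mirror what Figure 15 displays:

```python
# Illustrative model (not the 3PAR API) of the peer-volume creation rule:
# each source VV gets a same-named, same-sized Peer volume on the
# destination, created in RAID 0 with provisioning type "Peer".
def create_peer_volumes(source_vvs):
    return [{"name": vv["name"],          # name preserved
             "size_gib": vv["size_gib"],  # size preserved
             "raid": "RAID 0",
             "provisioning": "Peer",
             "data_transferred_gib": 0}   # no data yet at this stage
            for vv in source_vvs]

# Example source VVs (names and sizes follow the configuration tables)
peers = create_peer_volumes([{"name": "PM-FC-R6.OGI", "size_gib": 100},
                             {"name": "PM-FC-R6-OLTP.0", "size_gib": 500}])
```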

Note
Shutting down the Oracle 11gR2 RAC database nodes is not required during these zone changes. For an online migration of Oracle 11gR2 RAC database instances, it is important to execute the zone changes shown in Figures 16 and 17 in the order shown, to keep I/O flowing in the OLTP workload environment.

Figure 16. Intermediate layout of the FC interconnection between the hosts, the source and the destination HP 3PAR StoreServ

Figure 17. Intermediate layout of the FC interconnection between the hosts, the source and the destination HP 3PAR StoreServ
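The reasoning behind the note can be made concrete: at every step of the rezoning, the hosts must remain zoned to at least one array that presents their volumes. A small, hypothetical checker illustrates why the add-then-remove order is safe and the reverse is not:

```python
# Sketch of why the zone-change order matters: the host must keep at least
# one zoned path to a storage system at every step. Adding the destination
# zones first (Figure 16) and only then dropping the source zones
# (Figure 17) satisfies this; the reverse order would interrupt I/O.
def io_keeps_flowing(steps, start):
    """steps: list of (op, array) with op in {"add", "remove"};
    start: arrays the host is initially zoned to. Returns True if the
    host stays zoned to at least one array after every step."""
    zoned = set(start)
    for op, array in steps:
        if op == "add":
            zoned.add(array)
        else:
            zoned.discard(array)
        if not zoned:           # host lost all paths: I/O stops
            return False
    return True

correct_order = [("add", "destination"), ("remove", "source")]
wrong_order = [("remove", "source"), ("add", "destination")]
```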

If the SAN fabric zoning layout for starting the actual Peer Motion Oracle database migration is not correct, the wizard displays a popup window titled Migrate Data Confirmation, as shown in Figure 18. The window shown is for an online migration of the Oracle 11gR2 RAC database instance.

Figure 18. Making the necessary SAN fabric zone changes for an online Oracle 11gR2 RAC database migration using Peer Motion

The red text shown at the bottom of Figure 18 indicates an error condition. Until this condition is resolved, the Continue button shown in the figure is not active. Following the instructions in the middle of the popup window will establish the correct SAN fabric zoning configuration. After correcting the problem, click the Verify button shown in Figure 18. This will result in a message indicating "The system is zoned correctly." Clicking the Continue button will start the Oracle 11gR2 RAC database migration to the destination HP 3PAR StoreServ storage system.

Note
At this point, each of the migrating virtual volumes on the source HP 3PAR StoreServ storage system will get a SCSI-3 reservation, which can be verified from the HP 3PAR CLI using the command showrsv -l scsi3, as shown in Figure 19.

Figure 19. HP 3PAR CLI command showrsv showing SCSI-3 reservations on the migrating source HP 3PAR StoreServ storage system VVs
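A script could confirm the reservations by parsing the showrsv -l scsi3 output. The column layout below is an assumption made for this sketch; the real CLI output in Figure 19 may differ, so treat the parser as a template to adapt:

```python
# Illustrative parser for showrsv-style output. The sample text and its
# column layout are invented for this sketch, not the exact 3PAR format.
SAMPLE = """\
VV_name          Type   Host
PM-FC-R6.OGI     scsi3  pmrac1
PM-FC-R6-OLTP.0  scsi3  pmrac1
"""

def reserved_vvs(output):
    """Return the VV names that hold a SCSI-3 reservation."""
    rows = output.strip().splitlines()[1:]          # skip the header row
    return [r.split()[0] for r in rows if r.split()[1] == "scsi3"]

vvs = reserved_vvs(SAMPLE)
```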

The data transfer throughput and progress can be viewed in multiple locations. The red bar graph at the bottom of Figure 20 (obtained by selecting the Ports option of the Management Tree in the Peer Motion configuration of the HP 3PAR Management Console) shows the Total Data Throughput per peer port, both as an absolute number in KB/s and as a percentage of the port's speed.

Figure 20. Data throughput per peer port on the destination HP 3PAR StoreServ storage system

Historical information on the throughput per peer port in the Management Console can be obtained by selecting Performance & Reports in the Management Pane shown in Figure 21 and clicking the New Chart link in the Common Actions below it.

Figure 21. The Common Actions panel for Performance & Reports in the Management Console

In the New Chart wizard that opens, select Peer Ports Total Throughput; modify the name, description and polling interval, if desired, and click the Next button near the bottom of the screen. Figure 22 shows a screenshot of this point in the process.

Figure 22. Graphical representation of the historical throughput configuration for the peer ports on the destination HP 3PAR StoreServ storage system

In the next step, select the destination HP 3PAR StoreServ storage system, highlight both peer ports, click Next, and finish the wizard. Figure 23 shows the graphs for both peer ports over the course of a few minutes. The granularity of the data points is 5 seconds by default and can be changed to a larger value. Data points are averages over the polling interval. The vertical axis in Figure 23 shows the averaged data points in KB/s.

Figure 23. Graphical representation of the historical throughput configuration for the peer ports on the destination system
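Since the chart's data points are averages in KB/s over the polling interval, they can feed a rough estimate of the remaining transfer time. The sketch below uses invented sample values and treats KB as KiB; it is only meant to show the arithmetic:

```python
# Back-of-the-envelope sketch: given recent per-peer-port throughput
# samples (KB/s, averaged over the polling interval as in Figure 23),
# estimate how long the remaining data will take to transfer.
def eta_seconds(remaining_gib, samples_kb_per_s):
    """Estimate remaining transfer time from recent throughput samples."""
    avg_kb_s = sum(samples_kb_per_s) / len(samples_kb_per_s)
    remaining_kb = remaining_gib * 1024 * 1024   # GiB -> KiB (assumption)
    return remaining_kb / avg_kb_s

# e.g. ~200 GiB left at a sustained ~400,000 KB/s across both peer ports
eta = eta_seconds(200, [390_000, 410_000, 400_000])   # ~524 seconds
```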

Every virtual volume migration runs as a separate task inside the HP 3PAR OS on the destination HP 3PAR StoreServ storage system (the HP 3PAR CLI command showtask shows the Peer Motion migration task status). An unlimited number of migration tasks can be submitted by the Peer Motion wizard, but only nine of them run in parallel. The other migration tasks are queued in the HP 3PAR OS. Whenever an active migration task finishes, the queued task for another volume starts automatically.

The HP 3PAR CLI command statport -peer delivers this same information in numerical format and includes information about the queue length on the peer ports, their service time, the number of I/Os per second, and their I/O size. This information updates on screen every two seconds; the update frequency can be changed. Figure 24 shows a screenshot of the output for this command.

Figure 24. Output of the HP 3PAR CLI command statport -peer showing information on the peer ports

The throughput information can also be obtained by monitoring the ports on the SAN fabric switches in the data path of the peer links.

Figure 25, taken from the Management Console, shows the list of virtual volumes on the destination HP 3PAR StoreServ storage system with the transfer of all volumes completed. As shown in the figure, a total of nine virtual volumes from the source array were successfully imported on the destination array. Their provisioning type and RAID level are correctly displayed.

Figure 25. List of virtual volumes in the Management Console after the Oracle 11gR2 RAC database migration is completed for all the virtual volumes

Note
The WWN of each imported virtual volume on the destination array remains the same as for the corresponding virtual volume on the source system.
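The nine-way parallelism described above behaves like a fixed pool of migration slots draining a task queue. A toy simulation (durations invented, not tied to real volume sizes) reproduces the scheduling:

```python
import heapq

# Toy scheduler mirroring the described behavior: any number of migration
# tasks can be submitted, at most nine run in parallel, and a queued task
# starts as soon as an active one finishes.
def drain_time(durations, max_parallel=9):
    """Return total wall-clock time to complete all migration tasks."""
    slots = [0.0] * max_parallel      # next-free time for each slot
    heapq.heapify(slots)
    for d in durations:
        start = heapq.heappop(slots)  # earliest slot to become free
        heapq.heappush(slots, start + d)
    return max(slots)

# 12 equal migrations of 10 time units: 9 run at once, 3 queue behind them
total = drain_time([10] * 12)         # completes in two waves = 20 units
```

With 12 equal-length migrations and nine slots, the batch completes in two waves, matching the rule that a queued task starts only when an active one finishes.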

In the fifth and final phase of a Peer Motion migration, you clean up some configuration items on the source array by clicking the link for Post Migration Cleanup in the Common Actions section for Peer Motion shown in Figure 10. The cleanup process cancels the export of the migrated virtual volumes from the source to the destination HP 3PAR StoreServ storage system over the peer links and removes the host definition of the destination array on the source HP 3PAR StoreServ storage system. This cleanup is mandatory before another migration can start.

Note
If no more virtual volumes have to be migrated from the source HP 3PAR StoreServ storage system, you can remove the peer-link setup by clicking the link Remove PM Configuration in the Common Actions section for Peer Motion shown in Figure 10. This will reconfigure the peer ports on the destination HP 3PAR StoreServ storage system into host ports again.

The host port-peer port zoning between both arrays can now be removed, leading to the final zoning situation shown in Figure 26. The FC cables for the peer links between the source and the destination arrays can be removed as well.

Figure 26. FC interconnection layout after the peer links cleanup

The migrated virtual volumes can be removed from the source array at this time, and their space can be reutilized. When all the virtual volumes have been migrated to the destination array, the array can be reinitialized for use or decommissioned.

Online non-disruptive migration of Oracle Database 11gR2 RAC instances in DSS workload environment

Refer to the section HP 3PAR Peer Motion configuration for the details on the Peer Motion workflow.

Table 2 shows the VV and Oracle ASM disk configuration details used to set up the Oracle 11gR2 RAC database instance pmracdb2 of size 512 GiB, running the DSS workload with 100 users.

Table 2. HP 3PAR StoreServ VV and Oracle ASM disk configuration

VV Name            Size of each VV (in GiB)   CPG         Provisioning
PM-FC-R6.OGI       100                        FC-R6-CPG   Full
PM-FC-R6-DSS.0/3   500                        FC-R6-CPG   Full

Oracle ASM Disk Group   Redundancy   Size (in GiB)
OGI                     External     100
DATA_LOG_DSS            External     2000

The Oracle Grid Infrastructure (OGI) voting disk was configured during the installation of OGI on a separate Oracle ASM disk group, OGI. The Oracle 11gR2 RAC database instance pmracdb2 was set up using the Oracle ASM disk group DATA_LOG_DSS for housing the Oracle data and redo-log files.

Phase 4 of the Peer Motion migration selects the virtual volumes on the source HP 3PAR StoreServ storage system constituting the Oracle 11gR2 RAC database that will be migrated to the destination HP 3PAR StoreServ storage system, and then executes the actual Oracle database migration. This phase is started by clicking Migrate Data, shown in Figure 10. Next, the screen shown in Figure 11 is displayed. Select the type of migration as Online Migration, and then pick the names of the Oracle 11gR2 RAC database nodes as Linked Hosts, as the way to select the virtual volumes that will be migrated. All volumes exported from the source storage system to the selected hosts will be migrated. Once the Oracle 11gR2 RAC database nodes are selected, the names and some characteristics of the virtual volumes to be migrated are displayed in the bottom part of the screen, as shown in Figure 27. In this particular migration, four fully provisioned virtual volumes (constituting Oracle data and redo-log files; the Oracle Grid Infrastructure voting disk was already migrated during the migration of the OLTP workload environment) exported to two Oracle 11gR2 RAC database nodes will be migrated online and nondisruptively.

Figure 27. Selecting the Oracle 11gR2 RAC database nodes for an online migration, showing the virtual volumes on the source HP 3PAR StoreServ storage system that will be migrated nondisruptively
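With external redundancy, an ASM disk group's capacity is simply the sum of its member VVs, which is easy to sanity-check against Table 2:

```python
# Sanity check of the Table 2 sizing: with external redundancy the ASM
# disk group DATA_LOG_DSS spans the four 500 GiB VVs PM-FC-R6-DSS.0/3.
vv_size_gib = 500
vvs_in_group = 4                   # PM-FC-R6-DSS.0 through PM-FC-R6-DSS.3

data_log_dss_gib = vv_size_gib * vvs_in_group   # 2000 GiB, as in Table 2
```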

After clicking the Next button in Figure 27 (not shown) to proceed, you can now configure the allocation settings on the destination system for each virtual volume that will be migrated. The wizard screen for this is shown in Figure 28 and contains three steps. First, highlight the virtual volumes shown in Figure 28, and then choose the provisioning type for them. The provisioning type and the CPG type on the destination storage system can be specified per volume or for a group of highlighted virtual volumes. When you've completed this process, click the Add button shown near the bottom left of the screen. Figure 28 shows a scenario where the allocation settings for the four virtual volumes were already defined. All the virtual volumes must be moved out of the list shown at the top left of Figure 28 by clicking the Add button before the Next button (not shown) in the figure becomes active. When no more volumes are left in the list, proceed by clicking this Next button.

Figure 28. Selecting the allocation settings on the destination HP 3PAR StoreServ storage system per VV to be migrated

Based on the selections made, the Peer Motion wizard now starts the preparation phase that precedes the actual Oracle 11gR2 RAC database migration. For every virtual volume on the source HP 3PAR StoreServ storage system that is to be migrated, a Peer volume is created on the destination HP 3PAR StoreServ storage system. As shown in Figure 29, the Peer volumes are created in RAID 0 with a provisioning type of Peer. The size and the name of a Peer volume on the destination HP 3PAR StoreServ storage system are the same as for the virtual volume under migration on the source HP 3PAR StoreServ storage system. At this point in the process, the Peer volumes do not contain any data.

Figure 29. The Peer Volumes on the destination HP 3PAR StoreServ storage system

In the next step of the preparation phase for the Oracle 11gR2 RAC database migration, the wizard checks the zoning layout between the hosts and both storage arrays. For an online migration, the required zoning layout needs to move from what is shown in Figure 2 to what is shown in Figure 16 and then to what is shown in Figure 17, in this order. In Figure 16, the zone change has enabled FC connectivity from the hosts to both the source and the destination HP 3PAR StoreServ storage systems. The multipathing software on the hosts (native Device Mapper Multipath, DM-Multipath, configured on the RHEL 6.3 Oracle 11gR2 RAC database nodes) manages all four paths connected to it. Next, the Oracle 11gR2 RAC database hosts are unzoned from the source HP 3PAR StoreServ storage system. The SAN fabric zoning layout in place after this change is shown in Figure 17.

Note
Shutting down the Oracle 11gR2 RAC database nodes is not required during these zone changes. For an online migration of Oracle 11gR2 RAC database instances, it is important to execute the zone changes shown in Figures 16 and 17 in the order shown, to keep I/O flowing in the DSS workload environment.

The data transfer throughput and progress can be viewed in multiple locations. The red bar graph at the bottom of Figure 30 shows the Total Data Throughput per peer port, both as an absolute number in KB/s and as a percentage of the port's speed.

Figure 30. Data throughput per peer port on the destination HP 3PAR StoreServ storage system

Historical information on the throughput per peer port in the Management Console can be obtained by selecting Performance & Reports in the Management Pane shown in Figure 21 and clicking the New Chart link in the Common Actions below it. In the New Chart wizard that opens, select Peer Ports Total Throughput; modify the name, description and polling interval, if desired, and click the Next button near the bottom of the screen. In the next step, select the destination HP 3PAR StoreServ storage system, highlight both peer ports, click Next and finish the wizard. Figure 31 shows the graphs for both peer ports over the course of a few minutes. The granularity of the data points is 5 seconds by default and can be changed to a larger value. Data points are averages over the polling interval. The vertical axis in Figure 31 shows the averaged data points in KB/s.

Figure 31. Graphical representation of the historical throughput configuration for the peer ports on the destination HP 3PAR StoreServ storage system

The HP 3PAR CLI command statport -peer delivers this same information in numerical format and includes information about the queue length on the peer ports, their service time, the number of I/Os per second, and their I/O size. This information updates on screen every two seconds; the update frequency can be changed. Figure 32 shows a screenshot of the output for this command.

Figure 32. Output of the HP 3PAR CLI command statport -peer showing information on the peer ports

The throughput information can also be obtained by monitoring the ports on the SAN fabric switches in the data path of the peer links.

Figure 33, taken from the Management Console, shows the list of virtual volumes on the destination HP 3PAR StoreServ storage system with the transfer of all volumes completed. As shown in the figure, a total of four virtual volumes from the source array were successfully imported on the destination array. Their provisioning type and RAID level are correctly displayed.

Figure 33. List of virtual volumes in the Management Console after the Oracle 11gR2 RAC database migration is completed

Note
The WWN of each imported virtual volume on the destination array remains the same as for the corresponding virtual volume on the source system.

In the fifth and final phase of a Peer Motion migration, you clean up some configuration items on the source array by clicking the link for Post Migration Cleanup in the Common Actions section for Peer Motion shown in Figure 10. The cleanup process cancels the export of the migrated virtual volumes from the source to the destination HP 3PAR StoreServ storage system over the peer links and removes the host definition of the destination array on the source HP 3PAR StoreServ storage system. This cleanup is mandatory before another migration can start.

The migrated virtual volumes can be removed from the source array at this time, and their space can be reutilized. When all the virtual volumes have been migrated to the destination array, the array can be reinitialized for use or decommissioned.

Online non-disruptive migration of Oracle Database 10gR2 single instances in OLTP workload environment

Refer to the section HP 3PAR Peer Motion configuration for the details on the Peer Motion workflow. Table 3 shows the VV and Oracle ASM disk configuration details used to set up the Oracle 10gR2 single instance database pmsidb3 of size 512 GiB, running the OLTP workload with 100 users.

Table 3. HP 3PAR StoreServ VV and Oracle ASM disk configuration

VV Name                   Size of each VV (in GiB)   CPG         Provisioning
PM-FC-R6-4/7              500                        FC-R6-CPG   Full
PM-FC-R1-OLTP-10gR2.0/1   500                        FC-R1-CPG   Full

Oracle ASM Disk Group   Redundancy   Size (in GiB)
OGI                     External     100
DATA_SI                 External     2000
LOG_SI                  External     1000

The Oracle 10gR2 single instance database pmsidb3 was set up using the Oracle ASM disk groups DATA_SI for housing the Oracle data files and LOG_SI for the redo-log files.

Phase 4 of the Peer Motion migration selects the virtual volumes on the source HP 3PAR StoreServ storage system constituting the Oracle 10gR2 single instance database that will be migrated to the destination HP 3PAR StoreServ storage system, and then executes the actual Oracle database migration. This phase is started by clicking Migrate Data, shown in Figure 10. Next, the screen shown in Figure 11 is displayed. Select the type of migration as Online Migration, and then pick the name of the Oracle 10gR2 single instance database host as Host, as the way to select the virtual volumes to be migrated. All volumes exported from the source storage system to the selected host will be migrated. Once the Oracle 10gR2 single instance database host is selected, the names and some characteristics of the virtual volumes to be migrated are displayed in the bottom part of the screen shown in Figure 34. In this particular migration, six fully provisioned virtual volumes (four VVs constituting Oracle data files and two VVs for redo-log files) exported to the Oracle 10gR2 single instance database host will be migrated online and nondisruptively.

Figure 34. Selecting the Oracle 10gR2 single instance database host for an online migration

After clicking the Next button in Figure 34 (not shown) to proceed, you can now configure the allocation settings on the destination system for each virtual volume to be migrated. The wizard screen for this is shown in Figure 35 and contains three steps. First, highlight the virtual volumes shown in Figure 35, and then choose the provisioning type for them. The provisioning type and the CPG type on the destination storage system can be specified per volume or for a group of highlighted virtual volumes. When you've completed this process, click the Add button shown near the bottom left of the screen. Figure 35 shows a scenario where the allocation settings for two virtual volumes were already defined. All the virtual volumes must be moved out of the list shown at the top left of Figure 35 by clicking the Add button before the Next button in the figure becomes active. When no more volumes are left in the list, proceed by clicking this Next button.

Figure 35. Selecting the allocation settings on the destination HP 3PAR StoreServ storage system per VV to be migrated

Based on the selections made, the Peer Motion wizard now starts the preparation phase that precedes the actual Oracle 10gR2 single instance database migration. For every virtual volume on the source HP 3PAR StoreServ storage system that is to be migrated, a Peer volume is created on the destination HP 3PAR StoreServ storage system. As shown in Figure 36, the Peer volumes are created in RAID 0 with a provisioning type of Peer. The size and the name of a Peer volume on the destination HP 3PAR StoreServ storage system are the same as for the virtual volume under migration on the source HP 3PAR StoreServ storage system. At this point in the process, the Peer volumes do not contain any data.

Figure 36. The Peer Volumes on the destination HP 3PAR StoreServ storage system

In the next step of the preparation phase for the Oracle 10gR2 single instance database migration, the wizard checks the zoning layout between the hosts and both storage arrays. For an online migration, the required zoning layout needs to move from what is shown in Figure 2 to what is shown in Figure 37 and then to what is shown in Figure 38, in this order. In Figure 37, the zone change has enabled FC connectivity from the hosts to both the source and the destination HP 3PAR StoreServ storage systems. The multipathing software on the host (native Device Mapper Multipath, DM-Multipath, configured on the RHEL 6.3 Oracle 10gR2 single instance database host) manages all four paths connected to it. Next, the Oracle 10gR2 single instance database host is unzoned from the source HP 3PAR StoreServ storage system. The SAN fabric zoning layout that is in place after this change is shown in Figure 38.

Note
Shutting down the Oracle 10gR2 single instance database host is not required during these zone changes. For an online migration of Oracle 10gR2 single instance databases, it is important to execute the zone changes shown in Figures 37 and 38, in the order shown, to keep I/O flowing in the OLTP workload environment.

Figure 37. Intermediate layout of the FC interconnection between the hosts, the source and the destination HP 3PAR StoreServ

Figure 38. Intermediate layout of the FC interconnection between the hosts, the source and the destination HP 3PAR StoreServ

Note
At this point, each of the migrating virtual volumes on the source HP 3PAR StoreServ storage system will get a SCSI-3 reservation, which can be verified from the HP 3PAR CLI using the command showrsv -l scsi3, as shown in Figure 39.

Figure 39. HP 3PAR CLI command showrsv showing SCSI-3 reservations on the migrating source HP 3PAR StoreServ storage system VVs

The red bar graph at the bottom of Figure 40 shows the Total Data Throughput per peer port, both as an absolute number in KB/s and as a percentage of the port's speed.

Figure 40. Data throughput per peer port on the destination HP 3PAR StoreServ storage system

Historical information on the throughput per peer port in the Management Console can be obtained by selecting Performance & Reports in the Management Pane shown in Figure 21 and clicking the New Chart link in the Common Actions below it. In the New Chart wizard that opens, select Peer Ports Total Throughput; modify the name, description and polling interval, if desired, and click the Next button near the bottom of the screen. In the next step, select the destination HP 3PAR StoreServ storage system, highlight both peer ports, click Next and finish the wizard. Figure 41 shows the graphs for both peer ports over the course of a few minutes. The granularity of the data points is 5 seconds by default and can be changed to a larger value. Data points are averages over the polling interval. The vertical axis in Figure 41 shows the averaged data points in KB/s.

Figure 41. Graphical representation of the historical throughput configuration for the peer ports on the destination HP 3PAR StoreServ storage system

The HP 3PAR CLI command statport -peer delivers this same information in numerical format and includes information about the queue length on the peer ports, their service time, the number of I/Os per second, and their I/O size. The throughput information can also be obtained by monitoring the ports on the SAN fabric switches in the data path of the peer links.

Figure 42, taken from the Management Console, shows the list of virtual volumes on the destination HP 3PAR StoreServ storage system with the transfer of all volumes completed. As shown in the figure, a total of six virtual volumes from the source array were successfully imported on the destination array. Their provisioning type and RAID level are correctly displayed.

Figure 42. List of virtual volumes in the Management Console after the Oracle 10gR2 single instance database migration is completed for all the virtual volumes

Note
The WWN of each imported virtual volume on the destination array remains the same as for the corresponding virtual volume on the source system.

In the fifth and final phase of a Peer Motion migration, you clean up some configuration items on the source array by clicking the link for Post Migration Cleanup in the Common Actions section for Peer Motion shown in Figure 10. The cleanup process cancels the export of the migrated virtual volumes from the source to the destination HP 3PAR StoreServ storage system over the peer links and removes the host definition of the destination array on the source HP 3PAR StoreServ storage system. This cleanup is mandatory before another migration can start.

The migrated virtual volumes can be removed from the source array at this time, and their space can be reutilized. When all the virtual volumes have been migrated to the destination array, the array can be reinitialized for use or decommissioned.
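All three migrations above note that the imported virtual volumes retain the WWNs of their source volumes, which is what lets the hosts keep their device paths. A post-migration check could compare the WWN maps of both arrays; the WWN values below are fabricated for the sketch:

```python
# Post-migration sanity check: every migrated VV on the destination should
# report the same WWN as its source counterpart. The maps would be built
# from each array's volume listing; the values here are made up.
def wwns_preserved(source, destination):
    """Compare {vv_name: wwn} maps from both arrays for the migrated set."""
    return all(destination.get(name) == wwn for name, wwn in source.items())

src = {"PM-FC-R6-4": "50002AC0000A0004", "PM-FC-R6-5": "50002AC0000A0005"}
dst = dict(src)          # after a Peer Motion import, the WWNs match
```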


More information

VMware Site Recovery Manager with EMC RecoverPoint

VMware Site Recovery Manager with EMC RecoverPoint VMware Site Recovery Manager with EMC RecoverPoint Implementation Guide EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com Copyright

More information

Clustering ExtremeZ-IP 4.1

Clustering ExtremeZ-IP 4.1 Clustering ExtremeZ-IP 4.1 Installing and Configuring ExtremeZ-IP 4.x on a Cluster Version: 1.3 Date: 10/11/05 Product Version: 4.1 Introduction This document provides instructions and background information

More information

HP CloudSystem Enterprise

HP CloudSystem Enterprise HP CloudSystem Enterprise F5 BIG-IP and Apache Load Balancing Reference Implementation Technical white paper Table of contents Introduction... 2 Background assumptions... 2 Overview... 2 Process steps...

More information

Using VMware ESX Server with IBM System Storage SAN Volume Controller ESX Server 3.0.2

Using VMware ESX Server with IBM System Storage SAN Volume Controller ESX Server 3.0.2 Technical Note Using VMware ESX Server with IBM System Storage SAN Volume Controller ESX Server 3.0.2 This technical note discusses using ESX Server hosts with an IBM System Storage SAN Volume Controller

More information

Cisco MDS 9000 Family Highlights: Storage Virtualization Series

Cisco MDS 9000 Family Highlights: Storage Virtualization Series Cisco MDS 9000 Family Highlights: Storage Virtualization Series Highlighted Feature: Cisco Data Mobility Manager Purpose The Cisco MDS 9000 Family Highlights series provides both business and technical

More information

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the

More information

GlobalSCAPE DMZ Gateway, v1. User Guide

GlobalSCAPE DMZ Gateway, v1. User Guide GlobalSCAPE DMZ Gateway, v1 User Guide GlobalSCAPE, Inc. (GSB) Address: 4500 Lockhill-Selma Road, Suite 150 San Antonio, TX (USA) 78249 Sales: (210) 308-8267 Sales (Toll Free): (800) 290-5054 Technical

More information

High Availability Databases based on Oracle 10g RAC on Linux

High Availability Databases based on Oracle 10g RAC on Linux High Availability Databases based on Oracle 10g RAC on Linux WLCG Tier2 Tutorials, CERN, June 2006 Luca Canali, CERN IT Outline Goals Architecture of an HA DB Service Deployment at the CERN Physics Database

More information

Data Migration Service for isr6200

Data Migration Service for isr6200 Data Migration Service for isr6200 Planning Guide ISR654607-00 C Data Migration Service for isr6200 Planning Guide Information furnished in this manual is believed to be accurate and reliable. However,

More information

3PAR Fast RAID: High Performance Without Compromise

3PAR Fast RAID: High Performance Without Compromise 3PAR Fast RAID: High Performance Without Compromise Karl L. Swartz Document Abstract: 3PAR Fast RAID allows the 3PAR InServ Storage Server to deliver higher performance with less hardware, reducing storage

More information

HP Matrix Operating Environment 7.2 Recovery Management User Guide

HP Matrix Operating Environment 7.2 Recovery Management User Guide HP Matrix Operating Environment 7.2 Recovery Management User Guide Abstract The HP Matrix Operating Environment 7.2 Recovery Management User Guide contains information on installation, configuration, testing,

More information

Administering and Managing Log Shipping

Administering and Managing Log Shipping 26_0672329565_ch20.qxd 9/7/07 8:37 AM Page 721 CHAPTER 20 Administering and Managing Log Shipping Log shipping is one of four SQL Server 2005 high-availability alternatives. Other SQL Server 2005 high-availability

More information

Brocade Network Advisor High Availability Using Microsoft Cluster Service

Brocade Network Advisor High Availability Using Microsoft Cluster Service Brocade Network Advisor High Availability Using Microsoft Cluster Service This paper discusses how installing Brocade Network Advisor on a pair of Microsoft Cluster Service nodes provides automatic failover

More information

How To Backup Your Computer With A Remote Drive Client On A Pc Or Macbook Or Macintosh (For Macintosh) On A Macbook (For Pc Or Ipa) On An Uniden (For Ipa Or Mac Macbook) On

How To Backup Your Computer With A Remote Drive Client On A Pc Or Macbook Or Macintosh (For Macintosh) On A Macbook (For Pc Or Ipa) On An Uniden (For Ipa Or Mac Macbook) On Remote Drive PC Client software User Guide -Page 1 of 27- PRIVACY, SECURITY AND PROPRIETARY RIGHTS NOTICE: The Remote Drive PC Client software is third party software that you can use to upload your files

More information

Availability Guide for Deploying SQL Server on VMware vsphere. August 2009

Availability Guide for Deploying SQL Server on VMware vsphere. August 2009 Availability Guide for Deploying SQL Server on VMware vsphere August 2009 Contents Introduction...1 SQL Server 2008 with vsphere and VMware HA/DRS...2 Log Shipping Availability Option...4 Database Mirroring...

More information

XStream Remote Control: Configuring DCOM Connectivity

XStream Remote Control: Configuring DCOM Connectivity XStream Remote Control: Configuring DCOM Connectivity APPLICATION BRIEF March 2009 Summary The application running the graphical user interface of LeCroy Windows-based oscilloscopes is a COM Automation

More information

Violin Memory Arrays With IBM System Storage SAN Volume Control

Violin Memory Arrays With IBM System Storage SAN Volume Control Technical White Paper Report Best Practices Guide: Violin Memory Arrays With IBM System Storage SAN Volume Control Implementation Best Practices and Performance Considerations Version 1.0 Abstract This

More information

Installation Guide July 2009

Installation Guide July 2009 July 2009 About this guide Edition notice This edition applies to Version 4.0 of the Pivot3 RAIGE Operating System and to any subsequent releases until otherwise indicated in new editions. Notification

More information

Direct Storage Access Using NetApp SnapDrive. Installation & Administration Guide

Direct Storage Access Using NetApp SnapDrive. Installation & Administration Guide Direct Storage Access Using NetApp SnapDrive Installation & Administration Guide SnapDrive overview... 3 What SnapDrive does... 3 What SnapDrive does not do... 3 Recommendations for using SnapDrive...

More information

Dell PowerVault MD3400 and MD3420 Series Storage Arrays Deployment Guide

Dell PowerVault MD3400 and MD3420 Series Storage Arrays Deployment Guide Dell PowerVault MD3400 and MD3420 Series Storage Arrays Deployment Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION:

More information

Fibre Channel HBA and VM Migration

Fibre Channel HBA and VM Migration Fibre Channel HBA and VM Migration Guide for Hyper-V and System Center VMM2008 FC0054605-00 A Fibre Channel HBA and VM Migration Guide for Hyper-V and System Center VMM2008 S Information furnished in this

More information

5-Bay Raid Sub-System Smart Removable 3.5" SATA Multiple Bay Data Storage Device User's Manual

5-Bay Raid Sub-System Smart Removable 3.5 SATA Multiple Bay Data Storage Device User's Manual 5-Bay Raid Sub-System Smart Removable 3.5" SATA Multiple Bay Data Storage Device User's Manual www.vipower.com Table of Contents 1. How the SteelVine (VPMP-75511R/VPMA-75511R) Operates... 1 1-1 SteelVine

More information

HP StorageWorks Modular Smart Array 1000 Small Business SAN Kit Hardware and Software Demonstration

HP StorageWorks Modular Smart Array 1000 Small Business SAN Kit Hardware and Software Demonstration Presenter Name/Title: Frank Arrazate, Engineering Project Manager Hardware Installation Hi, my name is Frank Arrazate. I am with Hewlett Packard Welcome to the hardware and software installation session

More information

Team Foundation Server 2012 Installation Guide

Team Foundation Server 2012 Installation Guide Team Foundation Server 2012 Installation Guide Page 1 of 143 Team Foundation Server 2012 Installation Guide Benjamin Day benday@benday.com v1.0.0 November 15, 2012 Team Foundation Server 2012 Installation

More information

Building a Scalable Microsoft Hyper-V Architecture on the Hitachi Universal Storage Platform Family

Building a Scalable Microsoft Hyper-V Architecture on the Hitachi Universal Storage Platform Family Building a Scalable Microsoft Hyper-V Architecture on the Hitachi Universal Storage Platform Family Reference Architecture Guide By Rick Andersen April 2009 Summary Increasingly, organizations are turning

More information

istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering

istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering Tuesday, Feb 21 st, 2012 KernSafe Technologies, Inc. www.kernsafe.com Copyright KernSafe Technologies 2006-2012.

More information

HP ProLiant Cluster for MSA1000 for Small Business... 2. Hardware Cabling Scheme... 3. Introduction... 3. Software and Hardware Requirements...

HP ProLiant Cluster for MSA1000 for Small Business... 2. Hardware Cabling Scheme... 3. Introduction... 3. Software and Hardware Requirements... Installation Checklist HP ProLiant Cluster for HP StorageWorks Modular Smart Array1000 for Small Business using Microsoft Windows Server 2003 Enterprise Edition November 2004 Table of Contents HP ProLiant

More information

Dell SupportAssist Version 2.0 for Dell OpenManage Essentials Quick Start Guide

Dell SupportAssist Version 2.0 for Dell OpenManage Essentials Quick Start Guide Dell SupportAssist Version 2.0 for Dell OpenManage Essentials Quick Start Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer.

More information

WhatsUp Gold v16.3 Installation and Configuration Guide

WhatsUp Gold v16.3 Installation and Configuration Guide WhatsUp Gold v16.3 Installation and Configuration Guide Contents Installing and Configuring WhatsUp Gold using WhatsUp Setup Installation Overview... 1 Overview... 1 Security considerations... 2 Standard

More information

safend a w a v e s y s t e m s c o m p a n y

safend a w a v e s y s t e m s c o m p a n y safend a w a v e s y s t e m s c o m p a n y SAFEND Data Protection Suite Installation Guide Version 3.4.5 Important Notice This guide is delivered subject to the following conditions and restrictions:

More information

EMC VIPR SRM: VAPP BACKUP AND RESTORE USING EMC NETWORKER

EMC VIPR SRM: VAPP BACKUP AND RESTORE USING EMC NETWORKER EMC VIPR SRM: VAPP BACKUP AND RESTORE USING EMC NETWORKER ABSTRACT This white paper provides a working example of how to back up and restore an EMC ViPR SRM vapp using EMC NetWorker. October 2015 WHITE

More information

HP StorageWorks Automated Storage Manager User Guide

HP StorageWorks Automated Storage Manager User Guide HP StorageWorks Automated Storage Manager User Guide Part Number: 5697 0422 First edition: June 2010 Legal and notice information Copyright 2010, 2010 Hewlett-Packard Development Company, L.P. Confidential

More information

istorage Server: High Availability iscsi SAN for Windows Server 2012 Cluster

istorage Server: High Availability iscsi SAN for Windows Server 2012 Cluster istorage Server: High Availability iscsi SAN for Windows Server 2012 Cluster Tuesday, December 26, 2013 KernSafe Technologies, Inc www.kernsafe.com Copyright KernSafe Technologies 2006-2013.All right reserved.

More information

EMC Invista: The Easy to Use Storage Manager

EMC Invista: The Easy to Use Storage Manager EMC s Invista SAN Virtualization System Tested Feb. 2006 Page 1 of 13 EMC Invista: The Easy to Use Storage Manager Invista delivers centrally managed LUN Virtualization, Data Mobility, and Copy Services

More information

The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000)

The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) IntelliMagic, Inc. 558 Silicon Drive Ste 101 Southlake, Texas 76092 USA Tel: 214-432-7920

More information

Installation and Setup: Setup Wizard Account Information

Installation and Setup: Setup Wizard Account Information Installation and Setup: Setup Wizard Account Information Once the My Secure Backup software has been installed on the end-user machine, the first step in the installation wizard is to configure their account

More information

Panorama High Availability

Panorama High Availability Panorama High Availability Palo Alto Networks Panorama Administrator s Guide Version 6.0 Contact Information Corporate Headquarters: Palo Alto Networks 4401 Great America Parkway Santa Clara, CA 95054

More information

Fibre Channel NPIV Storage Networking for Windows Server 2008 R2 Hyper-V and System Center VMM2008 R2

Fibre Channel NPIV Storage Networking for Windows Server 2008 R2 Hyper-V and System Center VMM2008 R2 FC0054608-00 A Fibre Channel NPIV Storage Networking for Windows Server 2008 R2 Hyper-V and System Center VMM2008 R2 Usage Scenarios and Best Practices Guide FC0054608-00 A Fibre Channel NPIV Storage Networking

More information

SATA RAID Function (Only for chipset Sil3132 used) User s Manual

SATA RAID Function (Only for chipset Sil3132 used) User s Manual SATA RAID Function (Only for chipset Sil3132 used) User s Manual 12ME-SI3132-001 Table of Contents 1 WELCOME...4 1.1 SATARAID5 FEATURES...4 2 AN INTRODUCTION TO RAID...5 2.1 DISK STRIPING (RAID 0)...5

More information

June 2009. Blade.org 2009 ALL RIGHTS RESERVED

June 2009. Blade.org 2009 ALL RIGHTS RESERVED Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL RIGHTS

More information

Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems

Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems Applied Technology Abstract This white paper investigates configuration and replication choices for Oracle Database deployment with EMC

More information

IBM Endpoint Manager Version 9.1. Patch Management for Red Hat Enterprise Linux User's Guide

IBM Endpoint Manager Version 9.1. Patch Management for Red Hat Enterprise Linux User's Guide IBM Endpoint Manager Version 9.1 Patch Management for Red Hat Enterprise Linux User's Guide IBM Endpoint Manager Version 9.1 Patch Management for Red Hat Enterprise Linux User's Guide Note Before using

More information

User Manual. Onsight Management Suite Version 5.1. Another Innovation by Librestream

User Manual. Onsight Management Suite Version 5.1. Another Innovation by Librestream User Manual Onsight Management Suite Version 5.1 Another Innovation by Librestream Doc #: 400075-06 May 2012 Information in this document is subject to change without notice. Reproduction in any manner

More information

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER STORAGE CENTER DATASHEET STORAGE CENTER Go Beyond the Boundaries of Traditional Storage Systems Today s storage vendors promise to reduce the amount of time and money companies spend on storage but instead

More information

HP Array Configuration Utility User Guide

HP Array Configuration Utility User Guide HP Array Configuration Utility User Guide January 2006 (First Edition) Part Number 416146-001 Copyright 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change

More information

Deploying SAP on Microsoft SQL Server 2008 Environments Using the Hitachi Virtual Storage Platform

Deploying SAP on Microsoft SQL Server 2008 Environments Using the Hitachi Virtual Storage Platform 1 Deploying SAP on Microsoft SQL Server 2008 Environments Using the Hitachi Virtual Storage Platform Implementation Guide By Sean Siegmund June 2011 Feedback Hitachi Data Systems welcomes your feedback.

More information

FactoryTalk View Site Edition V5.0 (CPR9) Server Redundancy Guidelines

FactoryTalk View Site Edition V5.0 (CPR9) Server Redundancy Guidelines FactoryTalk View Site Edition V5.0 (CPR9) Server Redundancy Guidelines This page left intentionally blank. FTView SE 5.0 (CPR9) Server Redundancy Guidelines.doc 8/19/2008 Page 2 of 27 Table of Contents

More information

HP LeftHand SAN Solutions

HP LeftHand SAN Solutions HP LeftHand SAN Solutions Support Document Applications Notes Best Practices for Using SolarWinds' ORION to Monitor SANiQ Performance Legal Notices Warranty The only warranties for HP products and services

More information

Setup for Microsoft Cluster Service ESX Server 3.0.1 and VirtualCenter 2.0.1

Setup for Microsoft Cluster Service ESX Server 3.0.1 and VirtualCenter 2.0.1 ESX Server 3.0.1 and VirtualCenter 2.0.1 Setup for Microsoft Cluster Service Revision: 20060818 Item: XXX-ENG-QNNN-NNN You can find the most up-to-date technical documentation on our Web site at http://www.vmware.com/support/

More information

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can

More information

Implementing disaster recovery solutions with IBM Storwize V7000 and VMware Site Recovery Manager

Implementing disaster recovery solutions with IBM Storwize V7000 and VMware Site Recovery Manager Implementing disaster recovery solutions with IBM Storwize V7000 and VMware Site Recovery Manager A step-by-step guide IBM Systems and Technology Group ISV Enablement January 2011 Table of contents Abstract...

More information

Auditing manual. Archive Manager. Publication Date: November, 2015

Auditing manual. Archive Manager. Publication Date: November, 2015 Archive Manager Publication Date: November, 2015 All Rights Reserved. This software is protected by copyright law and international treaties. Unauthorized reproduction or distribution of this software,

More information

Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center

Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center Dell Compellent Solution Guide Kris Piepho, Microsoft Product Specialist October, 2013 Revisions Date Description 1/4/2013

More information

OVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available

OVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available Phone: (603)883-7979 sales@cepoint.com Cepoint Cluster Server CEP Cluster Server turnkey system. ENTERPRISE HIGH AVAILABILITY, High performance and very reliable Super Computing Solution for heterogeneous

More information

AXIS Camera Station Quick Installation Guide

AXIS Camera Station Quick Installation Guide AXIS Camera Station Quick Installation Guide Copyright Axis Communications AB April 2005 Rev. 3.5 Part Number 23997 1 Table of Contents Regulatory Information.................................. 3 AXIS Camera

More information

SanDisk ION Accelerator High Availability

SanDisk ION Accelerator High Availability WHITE PAPER SanDisk ION Accelerator High Availability 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table of Contents Introduction 3 Basics of SanDisk ION Accelerator High Availability 3 ALUA Multipathing

More information

Legal Notes. Regarding Trademarks. 2012 KYOCERA Document Solutions Inc.

Legal Notes. Regarding Trademarks. 2012 KYOCERA Document Solutions Inc. Legal Notes Unauthorized reproduction of all or part of this guide is prohibited. The information in this guide is subject to change without notice. We cannot be held liable for any problems arising from

More information

Network Monitoring. SAN Discovery and Topology Mapping. Device Discovery. Send documentation comments to mdsfeedback-doc@cisco.

Network Monitoring. SAN Discovery and Topology Mapping. Device Discovery. Send documentation comments to mdsfeedback-doc@cisco. CHAPTER 57 The primary purpose of Fabric Manager is to manage the network. In particular, SAN discovery and network monitoring are two of its key network management capabilities. This chapter contains

More information

Drobo How-To Guide. Cloud Storage Using Amazon Storage Gateway with Drobo iscsi SAN

Drobo How-To Guide. Cloud Storage Using Amazon Storage Gateway with Drobo iscsi SAN The Amazon Web Services (AWS) Storage Gateway uses an on-premises virtual appliance to replicate a portion of your local Drobo iscsi SAN (Drobo B1200i, left below, and Drobo B800i, right below) to cloudbased

More information

VT Technology Management Utilities for Hyper-V (vtutilities)

VT Technology Management Utilities for Hyper-V (vtutilities) VT Technology Management Utilities for Hyper-V (vtutilities) vtutilities provide a local graphical user interface (GUI) to manage Hyper-V. Hyper-V is supported on Windows Server 2008 R2 and Windows Server

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service Update 1 ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until

More information

How To Set Up A Two Node Hyperv Cluster With Failover Clustering And Cluster Shared Volume (Csv) Enabled

How To Set Up A Two Node Hyperv Cluster With Failover Clustering And Cluster Shared Volume (Csv) Enabled Getting Started with Hyper-V and the Scale Computing Cluster Scale Computing 5225 Exploration Drive Indianapolis, IN, 46241 Contents Contents CHAPTER 1 Introduction to Hyper-V: BEFORE YOU START. vii Revision

More information

Step-by-Step Guide for Windows Deployment Services in Windows Server 2008 to be used as an internal resource only

Step-by-Step Guide for Windows Deployment Services in Windows Server 2008 to be used as an internal resource only Windows Deployment Services is the updated and redesigned version of Remote Installation Services (RIS). Windows Deployment Services enables you to deploy Windows operating systems over the network, which

More information

HP ProLiant DL380 G5 High Availability Storage Server

HP ProLiant DL380 G5 High Availability Storage Server HP ProLiant DL380 G5 High Availability Storage Server installation instructions *5697-7748* Part number: 5697 7748 First edition: November 2008 Legal and notice information Copyright 1999, 2008 Hewlett-Packard

More information

ILTA 2013 - HAND 6B. Upgrading and Deploying. Windows Server 2012. In the Legal Environment

ILTA 2013 - HAND 6B. Upgrading and Deploying. Windows Server 2012. In the Legal Environment ILTA 2013 - HAND 6B Upgrading and Deploying Windows Server 2012 In the Legal Environment Table of Contents Purpose of This Lab... 3 Lab Environment... 3 Presenter... 3 Exercise 1 Add Roles and Features...

More information

Yamaha Audio Network Monitor User Guide

Yamaha Audio Network Monitor User Guide Yamaha Audio Network Monitor User Guide Note The software and this document are the exclusive copyrights of Yamaha Corporation. Copying or modifying the software or reproduction of this document, by any

More information

How to protect, restore and recover SQL 2005 and SQL 2008 Databases

How to protect, restore and recover SQL 2005 and SQL 2008 Databases How to protect, restore and recover SQL 2005 and SQL 2008 Databases Introduction This document discusses steps to set up SQL Server Protection Plans and restore protected databases using our software.

More information

RAID Utility User Guide. Instructions for setting up RAID volumes on a computer with a Mac Pro RAID Card or Xserve RAID Card

RAID Utility User Guide. Instructions for setting up RAID volumes on a computer with a Mac Pro RAID Card or Xserve RAID Card RAID Utility User Guide Instructions for setting up RAID volumes on a computer with a Mac Pro RAID Card or Xserve RAID Card Contents 3 RAID Utility User Guide 3 The RAID Utility Window 4 Running RAID Utility

More information

EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01

EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01 EMC ViPR Controller Version 2.4 User Interface Virtual Data Center Configuration Guide 302-002-416 REV 01 Copyright 2014-2015 EMC Corporation. All rights reserved. Published in USA. Published November,

More information

Dell High Availability Solutions Guide for Microsoft Hyper-V

Dell High Availability Solutions Guide for Microsoft Hyper-V Dell High Availability Solutions Guide for Microsoft Hyper-V www.dell.com support.dell.com Notes and Cautions NOTE: A NOTE indicates important information that helps you make better use of your computer.

More information

HP 3PAR StoreServ Storage and VMware vsphere 5 best practices

HP 3PAR StoreServ Storage and VMware vsphere 5 best practices Technical white paper HP 3PAR StoreServ Storage and VMware vsphere 5 best practices Table of contents Executive summary... 3 Configuration... 4 Fibre Channel... 4 Multi-pathing considerations... 7 HP 3PAR

More information

Pharos Uniprint 8.4. Maintenance Guide. Document Version: UP84-Maintenance-1.0. Distribution Date: July 2013

Pharos Uniprint 8.4. Maintenance Guide. Document Version: UP84-Maintenance-1.0. Distribution Date: July 2013 Pharos Uniprint 8.4 Maintenance Guide Document Version: UP84-Maintenance-1.0 Distribution Date: July 2013 Pharos Systems International Suite 310, 80 Linden Oaks Rochester, New York 14625 Phone: 1-585-939-7000

More information

PowerPanel Business Edition Installation Guide

PowerPanel Business Edition Installation Guide PowerPanel Business Edition Installation Guide For Automatic Transfer Switch Rev. 5 2015/12/2 Table of Contents Introduction... 3 Hardware Installation... 3 Install PowerPanel Business Edition Software...

More information

Monitoring the Network

Monitoring the Network CHAPTER 8 This chapter describes how the DCNM-SAN manages the network. In particular, SAN discovery and network monitoring are two of its key network management capabilities. This chapter contains the

More information

FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection-

FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- (iscsi) for Linux This page is intentionally left blank. Preface This manual briefly explains the operations that need to be performed

More information

Support Document: Microsoft SQL Server - LiveVault 7.6X

Support Document: Microsoft SQL Server - LiveVault 7.6X Contents Preparing to create a Microsoft SQL backup policy... 2 Adjusting the SQL max worker threads option... 2 Preparing for Log truncation... 3 Best Practices... 3 Microsoft SQL Server 2005, 2008, or

More information

Scalable NAS for Oracle: Gateway to the (NFS) future

Scalable NAS for Oracle: Gateway to the (NFS) future Scalable NAS for Oracle: Gateway to the (NFS) future Dr. Draško Tomić ESS technical consultant, HP EEM 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change

More information

capacity management for StorageWorks NAS servers

capacity management for StorageWorks NAS servers application notes hp OpenView capacity management for StorageWorks NAS servers First Edition (February 2004) Part Number: AA-RV1BA-TE This document describes how to use HP OpenView Storage Area Manager

More information