Designing high-availability solutions with HP Serviceguard and HP Integrity Virtual Machines

Executive Summary... 2
Integrity VM Overview... 2
Serviceguard for High Availability... 4
Serviceguard Integration with HP Integrity VM... 4
Designing Serviceguard Clusters with HP Integrity VM... 4
Integrity Virtual Machines as Serviceguard Packages... 7
Storage Considerations... 7
Example Use Cases... 9
Usage Considerations... 9
Integrity Virtual Machines as Serviceguard Nodes... 10
Storage Considerations... 15
Dynamic Memory Allocation... 17
Example Use Cases... 18
Usage Considerations... 19
Additional Considerations for VM as Node Configurations... 20
Application Monitoring... 21
Integrity VM Operating System and Application Support... 23
HP Integrity VM High Availability Architecture Considerations... 24
Networks... 24
Storage Protection... 25
Performance... 25
Software Upgrades... 26
Selecting the Best Model to Meet Your Requirements... 26
Summary - Putting it All Together... 28
Glossary... 28
For more information... 29
Executive Summary

HP Integrity Virtual Machines (Integrity VM) is a virtualization product in the HP Virtual Server Environment (VSE) that helps customers maximize server resource utilization and reduce total cost of ownership by allowing individual operating system environments to share CPU and I/O resources. Mission-critical and business-critical applications often require high availability to maintain service level objectives, and the integration of Integrity VM with Serviceguard can provide this capability for production, development, and test environments. With this new technology, it is important to understand its capabilities and how it can be implemented to meet customer high availability requirements. The purpose of this white paper is to:
1. Introduce the reader to the concepts of HP Integrity Virtual Machines.
2. Describe implementation models that can be used effectively to provide high availability in Integrity VM implementations.
3. Help customers decide which implementation model would be best suited to meet their consolidation and high availability needs.

Integrity VM Overview

HP Integrity Virtual Machines is a software partitioning and virtualization technology available on HP Integrity servers, including blade systems, that enables multiple operating system instances to run on a single server or npar (hard partition) while allowing these instances to share CPU, memory, and I/O resources. Integrity VM enables the creation of virtual machines (VMs), which are virtual hardware systems implemented in software that represent a collection of virtual hardware devices provided by a computer's actual physical hardware. Each virtual machine is a complete system environment containing virtual implementations of CPU, memory, disk, I/O, and other system resources capable of running an operating system called a guest OS. For the Integrity VM B.04.00 release, the guest OS can be:
- HP-UX 11i v2 (0609 or later)
- HP-UX 11i v3 (0703, 0709, 0803, and 0809)
- Windows 2003 Server (Enterprise or Datacenter edition) SP2
- Red Hat Linux Enterprise Edition Advanced Server Release 4 Update 5
- SUSE Linux Enterprise Server (SLES) for HP Integrity servers, SLES 10 Update 1 and Update 2

The VM can run any application supported by the guest OS and behave as it normally would on a single physical Integrity server, without the need for recompilation or other changes. Integrity VM is designed to provide binary application compatibility between native HP-UX 11i v2 and 11i v3 Integrity server applications and the same applications running within virtual machines, as long as the applications access devices that are virtualized by Integrity VM. VMs require Integrity VM host software to manage the hardware resources such as CPU, memory, and I/O on the physical system being virtualized and shared between multiple VMs. The VM host software runs on a standard HP-UX 11i v2 or 11i v3 operating system depending on the Integrity VM release (1), which can be managed by a variety of HP-UX and VSE tools such as HP Systems Insight Manager (SIM), System Management Homepage (SMH), and Global Workload Manager (gWLM). Figure 1 shows the hardware and software components used by Integrity VM, from the base

(1) Only HP-UX 11i v2 AR0712 and later 11i v2 releases are supported on Integrity VM A.03.50 hosts; 11i v3 support is for Integrity VM B.04.00 hosts only.
hardware on the host system, through the Integrity VM software and HP-UX operating system, to the virtual machines running on the VM host.

The value of Integrity VM is found in its ability to help consolidate application workloads from multiple servers to reduce the total cost of ownership for a server environment. This cost reduction is realized by reducing the total number of servers required to run applications through consolidation, while improving overall system utilization, providing faster server provisioning, and increasing the flexibility of system configurations. For example, Integrity VM allows multiple VMs with unique application and OS requirements to share the same physical server, which is also shown in Figure 1.

Figure 1: Integrity VM Components (VM guests running HP-UX 11i v2, HP-UX 11i v3, Linux RH4, Linux SLES 10, and Windows, each with its own applications, layered on the HP Integrity VM Host software and the Integrity hardware)

Integrity VM is integrated with other VSE components for Workload Management and HP Instant Capacity, including:
- HP Workload Manager (WLM)
- HP Global Workload Manager (gWLM)
- Temporary Instant Capacity (TiCAP)
- Global Instant Capacity (GiCAP)

These VSE components can be implemented within the VM host to allocate system resources as needed for the VMs running on the host. Processor Sets (PSETS), HP Process Resource Manager (PRM), WLM, and gWLM are supported running inside a VM guest; however, the use of TiCAP and GiCAP inside a VM guest is not supported.
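For readers new to Integrity VM, the following is an illustrative sketch (not taken from this paper) of how an administrator might inspect a VM host and its guests from the HP-UX command line; the guest name vmguest1 is a hypothetical example, and output details vary by Integrity VM release.

# List all virtual machines configured on this VM host, including
# their run state, vCPU count, and memory allocation
hpvmstatus

# Show detailed resource and device information for one guest
# (vmguest1 is a placeholder guest name)
hpvmstatus -V -P vmguest1

# Display the virtual switches defined on the VM host
hpvmnet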
Serviceguard for High Availability

HP Serviceguard creates high availability clusters using a networked grouping of HP Integrity and HP 9000 servers. These servers are typically configured with redundant hardware and software components to eliminate single points of failure (SPOFs). Serviceguard is designed to keep application services running in spite of hardware (e.g., System Processing Unit, disk, LAN, etc.) or software (e.g., operating system, user application, etc.) failures. In the event of a hardware or software failure, Serviceguard and other high availability subsystems coordinate operational transfer between components. Serviceguard uses packages to group application services (e.g., individual HP-UX processes) together, and packages are typically configured to run on several nodes in the cluster, one at a time. In the event of a service, node, network, or other monitored package resource failure on the node where the package is running, Serviceguard can automatically transfer control of the package to another node in the cluster, thus allowing the services to remain available with minimal interruption.

Serviceguard Integration with HP Integrity VM

The HP Virtual Server Environment encompasses a number of fully integrated and complementary components that are designed to enhance the functionality and flexibility of server environments. Serviceguard and Integrity VM are both VSE components that provide availability and partitioning capabilities for the HP virtualization strategy. Using Serviceguard together with Integrity VM provides the ability to:
- Migrate workloads using the flexibility of Integrity VM and the control of Serviceguard
- Fail over Integrity VM environments to other cluster node configurations (e.g., npars, servers; note that vpars are not supported with HPVM)
- Meet the consolidation and high availability requirements of many business-critical customers

Customers purchase physical systems to improve the isolation of mission-critical and business-critical applications for:
- Providing better security for the applications
- Supporting different operating systems, software versions, and application environments

The partitioning capabilities of VSE can reduce, or consolidate, the number of physical systems required to support an application environment, which in turn reduces customer costs, as demonstrated by the demand for npars and vpars today. Integrity VM offers finer-grained partitioning of system resources as compared to npars and vpars, and can support multiple operating systems on a single physical system. While Integrity VM facilitates application consolidation and isolation, the integration of Integrity VM with Serviceguard provides the added benefit of high availability protection for the applications running under Integrity VM.

Designing Serviceguard Clusters with HP Integrity VM

An interesting duality exists with VMs in that they function as nodes running an operating system while at the same time being instances running on a VM host. Since Serviceguard is designed to run on nodes and manage applications encapsulated within packages, integrating Serviceguard with Integrity VM can yield several different configuration possibilities. For example, it is possible to run Serviceguard on physical nodes and manage VMs as applications (i.e., VMs as packages), or run Serviceguard on the VMs to manage the applications running within the VMs (i.e., VMs as nodes).
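As a rough illustration of this duality (a sketch, not a procedure from this paper), the same environment is administered from different places in the two models; the host, guest, and package names below are hypothetical.

# VMs as packages: Serviceguard runs on the VM hosts, so the status of a
# packaged guest is checked from the VM host
cmviewcl -v -p vmguest1_pkg      # vmguest1_pkg is a placeholder package name

# VMs as nodes: Serviceguard runs inside the guest OS, so cluster and
# package status are checked from within the VM itself
cmviewcl -v                      # run on the VM guest acting as a cluster node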
There are many possible ways to configure Integrity VM with Serviceguard. The following section describes these different configurations in the form of implementation models to help characterize their differences and highlight their suitability for use in various system environments. These implementation models are currently supported with several different versions of Integrity VM, Serviceguard (2), and the Serviceguard Storage Management Suite (SMS), as shown in Table 1:

Table 1: Integrity VM / Serviceguard Support Matrix (Integrity VM releases with 11i v2 VM host support only)

Integrity VM A.01.20
- VMs as Serviceguard Packages (Serviceguard running on the HP-UX VM host): Serviceguard release A.11.16; A.11.17; A.11.17 w/ SMS A.01.00 (1)
- VMs as HP-UX Serviceguard Nodes (Serviceguard running on the HP-UX VM guest): Not supported
- VMs as Linux Serviceguard Nodes (Serviceguard running on the Linux VM guest): Not supported

Integrity VM A.02.00
- VMs as Serviceguard Packages: Serviceguard release A.11.16; A.11.17; A.11.17 w/ SMS A.01.00 (1)
- VMs as HP-UX Serviceguard Nodes: Serviceguard release A.11.16; A.11.17; A.11.17 w/ SMS A.01.00 (1)
- VMs as Linux Serviceguard Nodes: Not supported

Integrity VM A.03.00
- VMs as Serviceguard Packages: Serviceguard release A.11.16; A.11.17; A.11.17 w/ SMS A.01.00 (1); A.11.18; A.11.18 w/ SMS A.01.01 (1)
- VMs as HP-UX Serviceguard Nodes: Serviceguard release A.11.16; A.11.17; A.11.17 w/ SMS A.01.00 (1); A.11.17.01 (11i v3); A.11.18 (11i v2, 11i v3); A.11.18 w/ SMS A.01.01 (11i v2) (1)
- VMs as Linux Serviceguard Nodes: Serviceguard release A.11.18 (Linux Red Hat Release 4 Update 4/5)

Integrity VM A.03.50
- VMs as Serviceguard Packages: Serviceguard release A.11.16; A.11.17; A.11.17 w/ SMS A.01.00 (1); A.11.18; A.11.18 w/ SMS A.01.01 (1); A.11.18 w/ SMS A.02.00 (1)
- VMs as HP-UX Serviceguard Nodes: Serviceguard release A.11.16; A.11.17; A.11.17 w/ SMS A.01.00 (1,2); A.11.17.01 (11i v3); A.11.18 (11i v2, 11i v3); A.11.18 w/ SMS A.01.01 (11i v2) (1,2); A.11.18 w/ SMS A.02.00 (11i v2) (1,2); A.11.18 w/ SMS A.02.00 (11i v3) (1,2)
- VMs as Linux Serviceguard Nodes: Serviceguard release A.11.18 (Linux Red Hat Release 4 Update 4/5, SLES10 Update 1)

(2) See the relevant Integrity VM release notes for required Serviceguard patches to support specific Serviceguard revisions.
Table 1: Integrity VM / Serviceguard Support Matrix (cont.) (Integrity VM releases with 11i v3 VM host support only)

Integrity VM B.04.00
- VMs as Serviceguard Packages (Serviceguard running on the HP-UX VM host): Serviceguard release A.11.18; A.11.18 w/ SMS A.02.00 (1)
- VMs as HP-UX Serviceguard Nodes (Serviceguard running on the HP-UX VM guest): Serviceguard release A.11.18 (11i v2, 11i v3); A.11.18 w/ SMS A.02.00 (11i v2) (1,2); A.11.18 w/ SMS A.02.00 (11i v3) (1,2)
- VMs as Linux Serviceguard Nodes (Serviceguard running on the Linux VM guest): Serviceguard release A.11.18 (Linux Red Hat Release 4 Update 5, SLES10 Update 1/2)

The following notes refer to the Serviceguard release versions listed in Table 1:
1) Since Oracle RAC is currently not supported in VM guests, and Oracle single-instance and RAC is not recommended for use on VM hosts, support for the A.01.00 / A.01.01 / A.02.00 Serviceguard Storage Management Suite (SMS) with HP Integrity Virtual Machines is limited to the following bundles:
   VM Hosts:
   - T2771xx HP Serviceguard Storage Management
   - T2772xx HP Serviceguard Storage Management Premium
   - T2775xx HP Serviceguard Cluster File System
   VM Guests:
   - T2771xx HP Serviceguard Storage Management
   - T2772xx HP Serviceguard Storage Management Premium
   - T2773xx HP Serviceguard Storage Management for Oracle
   - T2774xx HP Serviceguard Storage Management for Oracle Premium
   - T2775xx HP Serviceguard Cluster File System
   - T2776xx HP Serviceguard Cluster File System for Oracle
2) The following A.01.00 / A.01.01 / A.02.00 Serviceguard Storage Management Suite (SMS) capabilities are not supported in HP Integrity VM guests:
   - Cross Platform Data Sharing (CDS) disk format (VxVM and CVM disk groups must be created using the cds=off option; see the sketch following these notes)
   - Enclosure Based Names (EBN)
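Note 2 above states that VxVM and CVM disk groups used with Integrity VM guests must be created without the Cross Platform Data Sharing format. The following is a minimal sketch of what that might look like on the VM host; the disk group name, disk media name, and device name are hypothetical examples.

# Initialize a disk for VxVM use (example device name)
/etc/vx/bin/vxdisksetup -i c4t0d1

# Create the disk group with CDS formatting disabled, as required for
# storage presented to Integrity VM guests (note 2 above)
vxdg init vmdg01 vmdg01_d1=c4t0d1 cds=off

# Verify that the cds flag is not set on the new disk group
vxdg list vmdg01 | grep -i flags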
Each implementation model described in this white paper includes example use cases and a list of considerations for using the models in particular consolidation and high availability scenarios. The cluster nodes shown in these models can be standalone servers or npars. It is also possible for packages used in VM as node configurations to fail over to a vpar configured as a Serviceguard node; however, vpars are not supported for use as VM hosts.

Integrity Virtual Machines as Serviceguard Packages

Figure 2 represents a basic Virtual Machine as a Serviceguard Package (or, VMs as Serviceguard Packages) model configuration as supported with the Integrity VM A.01.20 and later releases. In this configuration, a Serviceguard cluster is formed using VM host systems as nodes in the cluster, and the VMs are encapsulated within Serviceguard packages. In other words, Serviceguard is providing high availability for the VMs that are used to run the applications.

Figure 2: Virtual Machine as a Serviceguard Package Model (a VM guest packaged on one VM host fails over to the other VM host in the Serviceguard cluster)

The VM guest within the Serviceguard package is protected against VM host system hardware or software failures, in addition to VM failures. Depending on the type of failure, the package containing the VM guest can be either restarted on the same node or failed over to another VM host within the cluster. Since Serviceguard can only control the startup of the VM guest, any applications within the guest must be started through the guest's defined initialization sequence, or by using the guest's native operating system commands.

Storage Considerations

VM as Serviceguard Package configurations support all VM guest backing store types, including:
- Whole disks
- Disk partitions
- LVM logical volumes (3)
- VxVM logical volumes
- Files on any of the storage types listed above, including files on a Cluster File System (CFS)

The VM guest backing stores must reside on shared storage so that they are accessible by all VM hosts in the cluster to allow failover of the VM guests. Configuring shared storage for VM hosts is accomplished in a similar manner as a standard Serviceguard cluster configuration, with the only

(3) SLVM is not supported.
difference being that the shared storage must be defined as VM storage devices that are exclusively used by a specific VM guest. The Integrity VM toolkit script, hpvmsg_package, is used to create Serviceguard package configuration file and control script templates for the VMs to be protected by Serviceguard packages. This script is designed to determine the cluster shared backing store and application data storage used by the VM guest to be packaged, and will add the appropriate logical volume and mount point entries into the package control script for guest failover. Details on the procedure for configuring VM guests as Serviceguard packages can be found in the HP Integrity Virtual Machines Version 4.0 Installation, Configuration, and Administration manual. Information on configuring shared storage for Serviceguard clusters is available in the Managing Serviceguard manual. An example of a VM as Serviceguard Package storage configuration is shown in Figure 3.

Figure 3: VM as Serviceguard Package Storage Configuration (the packaged VM guest fails over between VM hosts; the VM backing store resides in a volume group (recommended) or other supported VM store on shared disk arrays, and application data resides in a volume group or on CFS)

Note: Once a VM guest is configured as a Serviceguard package, it is no longer possible to use Integrity VM commands (e.g., hpvmstart, hpvmstop) to start or stop the VM guest directly from the VM host. At this point, Serviceguard commands are used to control the startup and shutdown of the packaged guest. It is possible to use the hpvmconsole command from the VM host to connect directly to the VM guest and perform a startup or shutdown of the VM; however, Serviceguard will be monitoring the
status of the VM and will react to this change in status as either a package failure or in an unpredictable way.

There are several points to consider prior to using the hpvmsg_package script:
- It is highly recommended that the VM guest to be packaged be started first on each VM host node in the cluster to ensure that its backing stores are accessible from all cluster nodes.
- The script expects the VM guest to be running in order to identify all required backing stores and their mount points for the package control file template that it creates.
- The script for Integrity VM B.04.00 has been enhanced to configure either Serviceguard A.11.17 legacy packages or A.11.18 modular package formats.
- After executing the script, it is important to review the generated package control file templates to ensure the correct logical volumes and mount points have been identified and are included in the template.

Example Use Cases

The following are several example scenarios that can benefit from the Serviceguard integration with Integrity VM:
- HA for Development and Test Environments: Traditional development and test environments typically are not highly available. Using Integrity VM with Serviceguard provides HA for the VMs created to consolidate the applications used in these environments. The VMs can be failed over to another node in the event of a VM host failure.
- Maintenance and Load Balancing: VM guests running applications used for development or production activities can be easily moved as Serviceguard packages to other nodes within a cluster. This is helpful when planned system maintenance is required or to balance system loads during peak operating times.
- HA for Non-Clustered Production Applications: Any production applications that were never considered critical enough to run in a clustered environment, or applications and operating systems that do not work with Serviceguard, will gain improved availability without modification by simply moving onto VMs. Serviceguard can restart the VM on another VM host in the event of a VM host or VM guest failure.

Usage Considerations

When implementing VMs as Serviceguard Packages, several key points should be considered:
- Integrity VM host nodes should only run VM guests and not other user applications. The Integrity VM Fair Share Scheduler provides control of entitlements (i.e., the percentage or number of CPU clock cycles that are guaranteed to a VM) for VMs running on the VM host. The scheduler cannot monitor or respond to VM processing requirements with other workloads running on the VM host; therefore it is recommended to only run Integrity VM host software and VM guests on the cluster nodes. This rule also applies to non-Serviceguard configurations.
- VM guests within Serviceguard packages can only fail over to other cluster nodes that are running Integrity VM host software. VM guests can only run on pre-configured VM hosts. For a successful VM guest package failover, the failover node must be a VM host. Having prerequisite software installed on a failover node to support the execution of any application within a package is a basic requirement for Serviceguard cluster configurations.
- Serviceguard can only monitor the status of the VM guest running as a package, not the applications running within the VM. With a VM guest running as a package, Serviceguard can only monitor the status of the VM guest executing as an HP-UX process on the VM host, and not the applications running within the guest. In this case, any VM guest package failovers would be based on a failure of either the VM guest or its VM host node, and not due to any
application-specific failures. HP provides an HP Serviceguard for Integrity Virtual Machines Toolkit that includes package configuration files and utility scripts to help with the creation of VM guest packages and the monitoring of the VM guests within the packages. The A.01.20 version of the toolkit is available for download at http://software.hp.com (search for "Serviceguard for Integrity Virtual Machines Toolkit"), and later toolkit versions are provided with their corresponding Integrity VM products. Techniques for monitoring applications within VM guests are described in the Application Monitoring section of this white paper.
- Failover of VM guest applications is slower than that of clustered applications. Applications designed for clustered environments and running on a host node can be operational more quickly after a failover compared to an application running in a VM guest, because they do not incur the added time required to start (i.e., boot up) the VM before the application can be restarted. In this case, the VM guest and its applications are being failed over together, so the total time required for an application to be available for use consists of the VM start time, the guest OS boot-up time, and the application start-up time. However, the VM start time and guest OS boot-up time for a VM guest are significantly faster than the boot-up time of a physical node. If failover time is a primary concern, application recovery time measurements should be taken for both standard Serviceguard and VMs as Serviceguard Packages configurations to determine which configuration would be the best balance between your application consolidation and availability requirements.
- LANs using Accelerated Virtual I/O (AVIO) must be configured using AVIO-supported host Physical Point of Attachment (PPA) network devices. In VM as Serviceguard Package configurations using AVIO network devices, the standby LANs must use PPA devices supported by AVIO to prevent the loss of network connectivity that will occur if non-supported devices are used, even if the standby LAN link is up. Additional information on AVIO drivers can be found in the HP Integrity Virtual Machines Installation, Configuration, and Administration Version A.03.50 reference manual.

With the benefit of using Integrity VM to consolidate applications from physical machines to VMs, the advantages of implementing VMs as Serviceguard Packages configurations include providing HA for:
- VM guests in case of a VM host failure
- Standalone and other applications not designed for use in clustered environments

VMs as Serviceguard Packages configurations have the following limitations:
- It is difficult to monitor applications within the VM guest (custom monitoring must be developed and implemented)
- Application failover times will be slower for VM guests as compared to clustered applications due to the additional time required to start up the VM and boot up the guest OS

Integrity Virtual Machines as Serviceguard Nodes

In the previously described VMs as Serviceguard Packages model, Serviceguard provided high availability for VM guests encapsulated within packages. In the Virtual Machines as Serviceguard Nodes (or, VMs as Serviceguard Nodes) model, supported in the Integrity VM A.02.00 and later releases, HP-UX VM guests are used as actual cluster nodes to provide the same HA failover capabilities found in traditional Serviceguard cluster configurations. Essentially, Integrity VM can be used to consolidate Serviceguard clusters onto VMs. VMs as HP-UX Serviceguard Node cluster configurations can span across:
- VMs and separate physical nodes or vpars
- VMs on separate VM hosts
- A combination of the above
- VMs on the same VM host ("cluster in a box")
Linux guests running as nodes are supported starting with Integrity VM A.03.00 and for Linux version A.11.18 (with a required patch). Note: Linux clusters can only contain all physical Linux servers or all Linux VM guests as nodes in the cluster. Using physical Linux nodes with Linux VM guest nodes in the same cluster is not supported due to the differences in how physical I/O to shared storage is handled between a Linux guest using virtual I/O on an HP-UX host and a physical Linux server using a Linux I/O stack. Although there are many possible combinations of using VMs with physical nodes, the combination of HP-UX VMs as packages and HP-UX VMs as nodes on the same VM host is not supported at the present time. The following are several examples of VMs as nodes configurations that are currently supported. In Figure 4, a 2-node cluster is formed using a VM guest serving as one node and a physical system as the second node. Figure 4: cluster using a VM guest and Physical Node Failover VM guest Cluster VM host Physical Node In this configuration, provides high availability in the event of a VM guest or application failure. A failed application can either be restarted within the same VM guest or failed over to the physical node. Figure 5 is an example of using VM guests as nodes within a cluster. VM host software running on each physical node allows operation of the VM guests on the physical nodes. 11
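As an illustrative sketch (not a procedure from this paper), a two-node cluster such as the one in Figure 4 might be defined with standard Serviceguard commands run on one of the nodes; the node and file names are hypothetical, and the generated configuration file would still need to be edited for heartbeat networks, cluster lock or quorum server, and timeout values before being applied.

# Query both nodes (one VM guest, one physical server) and generate a
# cluster configuration template
cmquerycl -v -C /etc/cmcluster/cluster.ascii -n vmguest1 -n physnode1

# Validate the edited configuration, then compile and distribute it
cmcheckconf -v -C /etc/cmcluster/cluster.ascii
cmapplyconf -v -C /etc/cmcluster/cluster.ascii

# Start the cluster and confirm both nodes are running
cmruncl -v
cmviewcl -v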
Figure 5: Serviceguard cluster using VM guests as cluster nodes (VM guest 1 on VM Host 1 and VM guest 2 on VM Host 2 form the Serviceguard cluster; applications fail over between the guests)

As in the previous example, Serviceguard provides high availability in the event of a VM guest or application failure. However, in this configuration, a failed application can either be restarted within the same VM guest or failed over to another VM guest operating on a separate physical node. VM guests can also be used in a "cluster in a box" configuration, as shown in Figure 6. In this example, two VM guests form a cluster operating within a single physical node. This configuration is similar to using vpars within a single physical system to form a cluster in a box.

Note: Cluster in a box configurations are not recommended for mission-critical applications since there is no electrical isolation between the VM guest nodes and the physical node itself hosting the VM guests, which creates a Single Point of Failure (SPOF). The VM host OS can also be considered a SPOF.
Figure 6: Cluster in a Box configuration using VM guests (VM guest 1 and VM guest 2 on the same physical node form the Serviceguard cluster; applications fail over between the two guests)

In this configuration, a failed application can either be restarted within the same VM guest or failed over to another VM guest operating on the same physical node. Figures 6 and 7 are examples showing how Integrity VM can be used to consolidate nodes within Serviceguard clusters. In Figure 7, a single standby host is used to run two separate VM guests that are each part of two different Serviceguard clusters. The packages that normally run on Primary Nodes 1 and 2 can fail over to their corresponding VM guest failover nodes running on a single standby host configured to support the execution of multiple VM guests.

Figure 7: Serviceguard clusters using VM guests as standby failover nodes (Primary Node 1 and VM guest 1 on the standby host form Cluster 1; Primary Node 2 and VM guest 2 on the same standby host form Cluster 2; packages fail over from each primary node to its corresponding VM guest)

This configuration has the benefit of being able to consolidate the standby systems that are normally found in typical single cluster configurations, thus providing cost savings by reducing the number of required standby failover systems and making more efficient use of system hardware. Additional cost savings through the efficient use of the VM standby host's memory resources for the VM guests can be achieved by using the dynamic memory allocation feature available starting with the Integrity VM A.03.00 release. Details of this functionality are described in the Dynamic Memory Allocation section of this white paper.
Figure 8 shows an example of physical node consolidation within a single cluster by using multiple VM guests running on a single physical node. This example could also be used in a single cluster N+1 configuration where a single VM guest serves as a standby node for the other nodes within a cluster.

Figure 8: Serviceguard cluster node consolidation using VM guests (Physical Node 1, Physical Node 2, and two VM guests on a single VM host are members of the same Serviceguard cluster; packages fail over between the physical nodes and their corresponding VM guests)

In this example, a package on Physical Node 1 can fail over to its VM guest 1 adoptive node running on a VM host, while a package on the VM guest 2 node running on the same VM host can fail over to its adoptive Physical Node 2. This configuration provides both cost savings, by reducing the number of physical nodes required in the cluster, and better system hardware utilization. This example also creates a configuration in which no more than half of the cluster members are hosted on one physical server, which alleviates the problem of the cluster losing quorum due to one physical server failure (e.g., the VM host with 2 cluster nodes).

It is also possible to consolidate multiple clusters using VM guests. Figure 9 shows an example of two 2-node clusters using 4 physical servers being consolidated onto 2 VM hosts. In this example, the number of physical servers in use is reduced by 50%, which results in the benefit of conserving data center floor space and power usage.
Figure 9: Serviceguard multi-cluster consolidation using VM guests (before consolidation, Cluster 1 and Cluster 2 each consist of two physical nodes; after consolidation, VM Guest 1 and VM Guest 2 on each of two VM hosts form the same two clusters)

Storage Considerations

An important distinction between VM as package and node configurations is that VM as node configurations only support whole disk VM backing stores. One reason for this restriction is that it is not possible to set timeouts on logical volumes or file systems presented as backing stores to the VM guest, and any errors generated from these types of backing stores are not passed through the virtualization layers from the VM host to the VM guest, which would allow Serviceguard running in the VM to react to these conditions. Another reason relates to disk I/O performance and the speed at which I/O requests can be completed prior to a VM node failure, which can affect cluster reformation time (see the following Usage Considerations section for more information on handling outstanding I/O requests during a VM node failure).

As for data used by applications protected by Serviceguard packages, it must reside on shared storage that is physically connected to all nodes in the cluster, and it can be placed in LVM or VxVM logical volumes, or on a cluster file system (CFS), accessible by the VM guest. The storage for the application data presented to the VM guest by the VM host must be whole disks so that the logical volume and file system structures on this storage can be accessed by the other nodes in the cluster during a package failover. An example of a storage configuration used by an HP-UX VM as node configuration is shown in Figure 10a, and a Linux VM as node configuration in Figure 10b.
Figure 10a: HP-UX VM as Serviceguard Node Storage Configuration (a VM guest on a VM host and a physical node form the cluster; the VM backing store is a whole disk, and application data resides in a volume group or on CFS)

Figure 10b: Linux VM as Serviceguard Node Storage Configuration (two Linux VM guests on separate VM hosts form the cluster; each VM backing store is a whole disk, and application data resides in a volume group)

The primary difference between the HP-UX and Linux VM as node storage configurations is that the Linux configuration can only use Linux VM nodes in the cluster, whereas the HP-UX configuration can use either HP-UX physical or VM cluster nodes.
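As a minimal illustration of the whole-disk requirement described above (a sketch, not a procedure from this paper), whole disks might be presented from the VM host to a guest that will act as a Serviceguard node as follows; the guest name, adapter type, and device files are hypothetical examples and will differ by Integrity VM release and host I/O stack.

# Present a whole disk to the guest as its OS backing store
hpvmmodify -P vmguest1 -a disk:avio_stor::disk:/dev/rdisk/disk10

# Present a second whole disk that the guest will use for shared
# application data (LVM/VxVM structures are created inside the guest)
hpvmmodify -P vmguest1 -a disk:avio_stor::disk:/dev/rdisk/disk11

# Confirm the devices now visible to the guest
hpvmstatus -V -P vmguest1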
Dynamic Memory Allocation

A capability introduced with the Integrity VM A.03.00 release is the ability to dynamically allocate memory used by an HP-UX VM guest. The intended use case for this feature is VM hosts acting as consolidated standby servers, using multiple VMs that can run Serviceguard packages when a failover occurs (an example of this configuration is shown in Figure 7). In this configuration, the VM guests functioning as standby servers use a minimum amount of VM host memory resources until they are required to use additional memory to run a failover package. In this way, the VM host does not have to be configured with the full amount of memory required to support all failover packages at a single time. Note that the total memory currently allocated to all running VMs must be less than or equal to the physical memory on the VM host system minus the memory needed for the VM host itself. The advantages of this configuration are cost savings through the reduction of hardware systems and memory traditionally used by individual standby servers.

Figure 11 shows an example of how dynamic memory allocation is used in a failover situation. In step 1 of the figure, the VM is initially started with the maximum amount of memory required to run its HP-UX guest operating system and any failover application packages, to help prevent memory fragmentation. Once started, the VM then releases an amount of its initially allocated memory down to a comfortable minimum level that is required only to run the guest operating system. In the event of a package failover, the customer-defined run commands section of the package control script uses an hpvmmgmt command to dynamically allocate an amount of memory (in 64 MB chunks) to a specified target value (in this example, the full amount of memory used at VM guest startup), as shown in step 2. The following is a customer-defined run command function example in legacy package format to perform this operation (note that external_scripts available in the modular package format can also be used with Serviceguard version A.11.18 or above):

function customer_defined_run_cmds
{
# ADD customer defined run commands to be executed prior to service start.
:
# Increase VM Guest dynamic memory to value allocated at VM start
/opt/hpvm/bin/hpvmmgmt -x ram_target=start
test_return 51
}

At this point, the failover application has sufficient memory available to run and can be started by the package control script, as shown in step 3.
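To complement the script above, the following is an illustrative sketch (not taken from this paper) of inspecting dynamic memory from inside the HP-UX guest using the guest management software; the target value is a hypothetical example, and option behavior should be confirmed against the hpvmmgmt documentation for your Integrity VM release.

# Run inside the HP-UX VM guest (requires the Integrity VM guest
# management software): display dynamic memory information
/opt/hpvm/bin/hpvmmgmt -l ram

# Request a specific dynamic memory target in MB (example value);
# memory is grown or shrunk toward this target in chunks
/opt/hpvm/bin/hpvmmgmt -x ram_target=4096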
Figure 11: Dynamic Memory Allocation in an HP-UX VM Guest (step 1: the guest starts at its maximum memory size and then shrinks to a comfortable minimum; step 2: on package failover, guest memory is grown back toward the target maximum; step 3: the failover application is started)

When the application package is failed back to its original node, an hpvmmgmt command in the customer-defined halt commands function of the package control script can be used to reduce the VM guest memory back to a comfortable minimum value, as shown below:

function customer_defined_halt_cmds
{
# ADD customer defined halt commands to be executed after service is halted.
:
# Reduce VM Guest dynamic memory to comfortable minimum
/opt/hpvm/bin/hpvmmgmt -x ram_target=0
test_return 52
}

Note that changing the maximum memory value allocated for a VM requires the VM to be restarted with the new value. In addition, VM memory allocation may be slow, or may not be able to reach the original boot size of the VM, when VM host memory is tight or very fragmented.

Example Use Cases

Virtual Machines as Serviceguard Nodes configurations offer a variety of possible use cases for providing both application consolidation and high availability. The following are several examples:
- Production Application Availability. Important applications can be protected by Serviceguard and moved to another VM guest or physical node in the event of a failure to minimize application downtime. This functionality is identical to a traditional Serviceguard cluster using standalone physical systems or hard partitions.
- Cluster Node Consolidation. Both primary and standby nodes in single and multiple clusters can be consolidated using VM guests on a single system to reduce hardware costs, data
center floor space, and power usage, in addition to improving system utilization while maintaining application isolation.

For Cluster in a Box configurations using VM guests, potential use cases include:
- Cluster Testing with Minimal Hardware. A cluster can be easily set up for testing purposes using a Cluster in a Box configuration without requiring additional system hardware for cluster nodes.
- Reducing Data Center Floor Space and Power Usage while Improving Availability. Cluster in a Box configurations reduce data center floor space and power requirements while providing some level of high availability for the applications running within the VM guests serving as nodes on a single VM host server. However, the server is still a SPOF in these configurations.

Usage Considerations

Virtual Machines as Serviceguard Nodes configurations should be considered whenever there is a need for consolidating systems in clusters and the applications require the full HA monitoring and failover functionality provided by Serviceguard. These configurations allow a reduction in the total number of physical systems required for clusters by moving cluster nodes from individual physical systems to multiple VMs running on single systems.
- As with VMs as Serviceguard Packages configurations, Integrity VM host nodes should only run VM guests and not other user applications. Other workloads may adversely affect the allocation of system resources to the VM guests running on the VM hosts.
- Monitoring of applications within a VM as Node configuration is the same as in a traditional Serviceguard cluster because Serviceguard is running within the VM guest. No specialized monitoring agents are required, as is the case for VM as Serviceguard Package configurations.
- Serviceguard failover times for VM as Node configurations will be broadly similar to traditional Serviceguard cluster failovers because application packages are simply restarted on their pre-configured adoptive nodes. These configurations do not incur the additional VM boot time that is part of a VM as Serviceguard Package configuration, where the failed-over VM guest package must restart the VM guest on its adoptive node.
- Integrity VM has a default 5-second network polling interval that will increase Serviceguard failover time by several seconds. To ensure that the vswitches used by Serviceguard fail over in less than 5 seconds, this polling interval can be set to a recommended value of 2 seconds when using Serviceguard with Integrity VM, by modifying the HPVMNETINTVAL=n parameter in the file /etc/rc.config.d/hpvmconf. (Note the value n is an integer between 1 and 10 that specifies the number of seconds for the polling interval.)
- Cluster reformation time will be somewhat longer for VM as Node configurations (approximately 40-70 seconds longer compared to a cluster consisting only of physical nodes) to allow all outstanding I/O requests from the VM guests through the VM host virtualization layer to complete before cluster activities can resume following a cluster reformation. Serviceguard uses an io_timeout_extension parameter that is set at cluster configuration time to extend the quiescence period of the cluster reformation based on whether a VM node is present in the cluster and the I/O timeout settings on the VM host. It is important to note that:
  - The io_timeout_extension parameter is set internally by Serviceguard and is not configurable by the user; however, its value can be viewed using the cmviewconf or cmviewcl -v -f commands, or can be found in the system log file (4).
  - It is highly recommended to install the VM guest management software, especially on VM guests functioning as Serviceguard nodes, in order for Serviceguard to determine an optimal
io_timeout_extension value (otherwise, Serviceguard would assume the most conservative value of 70 seconds, resulting in unnecessarily lengthening the cluster recovery time).
  - Be aware that the online addition or removal of VM cluster nodes, or changes to cluster membership parameters (available with the Serviceguard A.11.18 release), can affect the cluster quiescence period.
  - In a failure scenario where the pending I/Os from a VM guest are not cleared within its extended quiescence time period, the Integrity VM software will perform a TOC (Transfer of Control, or CPU reset) on the VM host servicing the guest to ensure data integrity by terminating any outstanding I/O requests from the affected VM guest.
- When performing a cluster consolidation, as with any workload consolidation using Integrity VM, careful planning of the VM configuration is required to ensure proper performance of the VM guests by providing a sufficient number of processors and available memory, in addition to storage and network I/O connections, to handle their workloads. Any initial performance problems with a VM guest can be compounded when application workloads are failed over to it by Serviceguard in response to a failure in one of the other cluster members.
- Cluster in a Box configurations should not be considered for running mission- or business-critical applications, as the physical VM host system is a Single Point of Failure (SPOF). If the physical system fails, the entire cluster also fails.
- Integrity VM instances are not highly available in VMs as Serviceguard Nodes configurations. A failure of a VM guest is similar to a node failure in a cluster. It is the use of Serviceguard within the VM guest that provides high availability for the applications running in the VM.
- VMs as Serviceguard Nodes configurations do have a shortcoming in that the adoptive failover VMs must be executing and consuming some degree of VM host resources, which could potentially be used by other VMs that are not part of the cluster. The use of the dynamic memory allocation feature, available starting with the Integrity VM A.03.00 release, should be considered to better manage adoptive VM node memory usage during application failovers.

(4) System log file names are /var/adm/syslog/syslog.log on HP-UX systems and /var/log/messages on Linux systems.

Additional Considerations for VM as Node Configurations

Serviceguard clusters rely on a cluster daemon process called cmcld that determines cluster membership by sending heartbeat messages to other cmcld daemons on other nodes within the cluster. The cmcld daemon runs at a real-time priority and is locked in memory. Along with handling the management of Serviceguard packages, cmcld also updates a safety timer within the kernel to detect kernel hangs, checks the health of networks on the system, and performs local LAN failovers. Status information from cmcld is written to the node's system log file.

In VMs as node configurations, there are some situations where VM guests defined with multiple vcpus, or a single vcpu with insufficient entitlement, can potentially experience cmcld run time delays under heavy processing load conditions. If the run time delay is longer than the configured cluster NODE_TIMEOUT value (i.e., the time after which a node may decide that another cluster node has become unavailable), cmcld will trigger a cluster reformation just as if a node had failed. However, since no node has actually failed, the cluster will reform, potentially with the same number of nodes it originally had before the cmcld run delay was reported, depending on the length of the run delay.
Other factors that may contribute to this situation include vcpu processing entitlement percentages and the number of vcpus assigned per VM, as they relate to HP-UX kernel time slice processing. cmcld run delays can be identified by the following warning reported in the system log file:

[date/time VM name] cmcld [PID]: Warning: cmcld process was unable to run for the last <x.yz> seconds
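As an illustrative aid (not from this paper), the warning above can be checked for on an HP-UX VM guest node with a simple search of the system log, and the current cluster timing parameters can be reviewed before deciding on any adjustment; the cluster name is a placeholder.

# Look for cmcld scheduling delays on an HP-UX guest node
grep "cmcld" /var/adm/syslog/syslog.log | grep "unable to run"

# Review the current cluster timing parameters (node timeout and
# heartbeat interval) before deciding whether to adjust them
cmgetconf -v -c <cluster_name> /tmp/cluster_current.ascii
grep -Ei "NODE_TIMEOUT|HEARTBEAT_INTERVAL" /tmp/cluster_current.ascii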
It is highly recommended to install the Fair Share Scheduler patches (PHKL_33604 and PHKL_33605) on all VM A.02.00 hosts to minimize the possibility of encountering this problem and triggering false cluster reformations (Note: These patches are included with HP-UX 11i v2 AR0609, which is required for Integrity VM A.03.00, and with HP-UX 11i v2 AR0712 for VM A.03.50 hosts). Another option is to increase the cluster NODE_TIMEOUT to a value larger than the run delay reported in the syslog file. The default value for this cluster parameter is 2 seconds, and the maximum recommended value is 30 seconds. However, for most installations, a setting between 5-8 seconds is usually more appropriate. Note that increasing the NODE_TIMEOUT value has the effect of the cluster taking longer to react to an actual node failure.

In summary, advantages of VM as Serviceguard Node configurations include:
- Reducing the number of physical systems required for a cluster by consolidating nodes using VMs
- Providing standard Serviceguard monitoring and HA for applications without incurring VM boot time during failover

VM as Serviceguard Node configurations have the following limitations:
- VM instances are not highly available; a failure of a VM guest is similar to a node failure in a cluster
- Failover VMs will use additional VM host resources, which could be used by other VMs running on the VM host

Application Monitoring

The primary function of Serviceguard is to monitor application and other system resources and to react to failures when they occur. Within VMs as Serviceguard Nodes configurations, the VMs themselves are cluster nodes, and the system resources and applications under their control can be monitored by Serviceguard just as in standard non-Integrity VM cluster configurations. However, with VMs as Serviceguard Packages configurations, the VM instances are managed as packages, and the status of the resources and applications within the VM are not known to Serviceguard. Non-clustered applications previously running on standalone servers can benefit from a level of protection against OS and host system hardware failures by restarting the VM in which they are executing as a package on another VM host. If some level of monitoring is required for applications running in a VM as Serviceguard Package configuration, custom application monitoring can be implemented. The following are several methods that can be used:

Guest-Based Monitoring. A program, or agent, runs within the VM and monitors the status of an application also running within the same VM. The monitoring method used by the agent depends on the application being monitored, and can range from verifying the existence of a specific process ID (PID) to checking application functionality or performance. Another required function of the monitoring agent is to perform some type of recovery action when an application failure is detected. Depending on the type of failure detected, recovery actions can range from attempting to restart the application a specific number of times to halting the VM in which the application is running to trigger a package failover action from the VM host node. When creating a guest-based monitoring agent, there are several implementation options to consider as part of its design. A monitoring agent can be:
- A process run from an HP-UX inittab entry that drives the monitoring and recovery functions for one or more applications.
- A script that is invoked at application startup, which specifies application PIDs to be monitored and recovery functions.
- Implemented by developing user-customizable templates for application monitoring and recovery actions (e.g., restarting the application a specified number of times, halting the VM guest, etc.).

Possible recovery actions to be performed by a guest-based monitoring agent in the event of an application failure include:
- Restarting the application a number of times within the VM.
- Halting the VM, thus triggering a failover of the VM to another VM host cluster member.

Guest-based monitoring of applications has the following advantages:
- No modifications are required for the VM guest package running on the VM host.
- No communications are required between the VM guest and VM host.
- No security issues arise because application management authority is confined to the VM guest.
- Monitoring mechanisms (e.g., the ps command to monitor process IDs) are readily available from the VM guest operating system.

The shortcomings of guest-based application monitoring include:
- A custom application monitor must be developed, tested, and maintained.
- The VM host has no visibility of the status of applications running within the VM guest.
- There is no host management (e.g., starting, halting) of applications other than halting and restarting their corresponding VM guest.

Host-Based Monitoring. A program, or agent, is used by the VM host system to probe the status of an application running within the VM guest. In this method, a service defined within the VM guest package can be implemented to monitor an application within the VM guest by communicating directly with either the application or its customized monitoring agent running in the VM guest, using a specific external network interface (e.g., a TCP/IP connection on a specific network port, UDP, etc.). The customized monitoring agent running within the VM guest (if implemented) can be designed to provide status information about the application to the monitor service associated with the VM guest package, which can be written into log files located on the VM host. The returned application status information would allow Serviceguard to detect an application failure and initiate a halt and failover of the corresponding VM guest package. One example of a host-based monitor for a guest-based application would be the periodic retrieval of a known web page from a guest-based web server to verify that it is functioning correctly. As with the guest-based monitoring agent, the host-based agent can recover from a detected application failure by either restarting the application a defined number of times within the VM or halting the VM to trigger a failover of the VM to another VM host cluster member.

Using host-based monitoring provides:
- Centralized monitoring of applications on all VM guests from the VM host.
- Tracking of individual application failures, which can be captured using log files on the VM host (must be custom-written).
- Serviceguard protection of the monitor by configuring the monitor as a package service of the VM guest.

The disadvantages of host-based monitoring include:
- The design and implementation of the monitoring agents and communication links can be complex.
- The monitors developed are application-specific.
- Communications between a VM guest and its VM host require security authentication and authorization access controls.

HP is investigating ways to monitor applications residing within VMs as Serviceguard Packages configurations and will provide additional supported monitoring methods and best practices (e.g., templates, toolkits, product documentation, white papers, etc.) with future releases of Integrity VM.

Integrity VM Operating System and Application Support

The following is a summary of the operating systems and applications that are supported with the Integrity VM B.04.00 release:

VM guest operating systems
- HP-UX 11i v2 (0609 release or later)
- HP-UX 11i v3 (0703, 0709, 0803, and 0809)
- Microsoft Windows 2003 Server (Enterprise or Datacenter edition) SP2 for Itanium-based systems (Note: Microsoft Cluster Server (MSCS) is not supported at this time)
- Linux Red Hat 4 Update 4 and Update 5
- SUSE Linux Enterprise Server (SLES) for HP Integrity servers, SLES 10 Update 1 and Update 2

Oracle
- Single-instance Oracle 9iR2, 10gR1/R2
- Oracle 11g is not supported at this time
- RAC configurations are not supported at this time

SAP
- SAP software running on HP-UX 11i v2
- 3rd-party applications used by SAP depend on support by the 3rd-party ISV applications

Integrity VM is binary compatible with HP-UX 11i v2 Itanium native applications (note that applications with specific device dependencies should be reviewed for supportability).

HP Enterprise Cluster Master Toolkit (ECMT) and Developers Toolbox
- ECMT version B.04.00 and the Developers Toolbox version A.01.00 are supported in VM as node configurations

HP Serviceguard Disaster Tolerant Solutions
- Extended distance clusters for VM as package configurations (for all HPVM-supported Serviceguard versions)
- Metrocluster VM as package configurations using the following version combinations:
  Metrocluster Continuous Access EVA: A.03.00 / Integrity VM B.04.00; A.02.00 / Integrity VM A.03.50; A.01.00 / Integrity VM A.03.00
  Metrocluster Continuous Access XP: A.08.00 / Integrity VM B.04.00; A.07.00 / Integrity VM A.03.50; A.06.00 / Integrity VM A.03.00
  Metrocluster EMC SRDF: A.07.00 / Integrity VM B.04.00; A.06.00 / Integrity VM A.03.50; A.05.01 / Integrity VM A.03.00
  Note: Metrocluster cross-subnet configurations are not supported at this time. Continentalclusters are not supported at this time.

HP Integrity VM High Availability Architecture Considerations

Integrity VM provides the ability to create virtual machines, or virtual hardware systems implemented in software, consisting of a collection of virtual hardware devices that interact with actual physical hardware (CPU, memory, I/O) on a host system. With this virtual-to-physical hardware association, it is important to consider how these components interact and how they affect the performance and availability of the VMs and the applications running within them. The following sections describe several aspects of VM hardware and software configurations, in general and with Serviceguard, that should be considered prior to implementing any solution. Additional information on VM configuration restrictions can be found in the HP Integrity Virtual Machines Release Notes and the HP Integrity Virtual Machines Installation, Configuration, and Administration Guide associated with the Integrity VM version being deployed.

Networks

For VM as Serviceguard Package configurations:
- Three LAN connections are recommended: one LAN for a dedicated Serviceguard heartbeat for the VM host, and a primary/standby LAN pair for VM guests, which are monitored by Serviceguard on the VM host.
- Auto Port Aggregation (APA) is supported and can be used to provide network bandwidth scalability, load balancing between the physical links, automatic fault detection, and HA recovery. (Note that it is important when using APA to have at least two physical NICs configured to avoid a single point of failure for the cluster heartbeat connections.)
- Serviceguard also has a network monitor that provides network failure detection options for identifying failed network cards, based on inbound and outbound message counts, and failing over to configured standby LANs. The vswitch monitor is responsible for monitoring the activities of the Serviceguard network monitor and automatically moves the vswitch configuration, when required, between the primary and standby physical network interfaces. (A vswitch is a virtual device that accepts network traffic from one or more VMs and directs it to an associated port on a physical network interface card, or NIC, used by a VM guest.) The vswitch monitor is installed as part of the Integrity VM product and requires no user configuration.

For VM as Serviceguard Node configurations:
- Three LAN connections are recommended: one LAN for a dedicated Serviceguard heartbeat for the VM guest, and a primary/standby LAN pair for the VM guest that is monitored by Serviceguard on the VM guest.
- Linux channel bonding for making network connections highly available is currently not supported in Linux VM guests. It is recommended to use APA LACP_AUTO mode to provide switch port-level redundancy and APA LAN Monitor mode to provide switch/hub-level redundancy on the VM host NICs to make Linux guest networking highly available (an illustrative vswitch-over-APA sketch follows this list).
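The following is an illustrative sketch (not from this paper) of how a vswitch might be layered over an APA aggregate on the VM host to give guests a redundant network path, as recommended above; the vswitch name, guest name, and APA PPA number are hypothetical and depend on how APA is configured on the host.

# Create a virtual switch on the VM host over an APA link aggregate
# (lan900 is a typical APA aggregate PPA; adjust to your configuration)
hpvmnet -c -S vmvlan0 -n 900

# Boot the vswitch and verify its state and backing interface
hpvmnet -b -S vmvlan0
hpvmnet -S vmvlan0

# Attach a guest virtual NIC to the vswitch (guest name is a placeholder)
hpvmmodify -P vmguest1 -a network:avio_lan::vswitch:vmvlan0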
For either VM as Serviceguard Package or VM as Serviceguard Node configurations, your availability and network performance requirements should be used to determine whether VMs should share physical network ports or be assigned their own dedicated ports.

Storage Protection

Disk storage protection should be performed on the VM host. Implementing a storage protection solution (e.g., RAID mirroring) for the physical storage on the host automatically protects the storage used by the VMs and eliminates the need to implement the same solution for each VM, in addition to minimizing virtualization overhead.

Multipathing solutions should also be implemented on the VM host, as they are not supported within VM guests (note that this also applies to Native Multipathing with HP-UX 11i v3 guests). Only the primary paths to virtual disks for VMs can be used; secondary paths are not permitted. Logical volumes used as virtual disks can provide their own multipathing capabilities (e.g., LVM PVlinks, VxVM DMP). HP Secure Path and EMC PowerPath are two other supported multipathing options when using HP or EMC disk arrays. HP-UX 11i v3, which includes a Native Multipathing storage stack, is supported with Integrity VM B.04.00 hosts and eliminates the need for alternative multipathing solutions.

Performance

With both VM as Serviceguard Package and VM as Serviceguard Node configurations, Serviceguard will move a workload (i.e., a VM or an application) to a failover node in the cluster in the event of a failure. The cluster design should ensure that all failover nodes have sufficient system resources to run their existing workloads in addition to the workload that is being failed over. If a node with an existing workload does not have sufficient capacity to handle a failover workload, several options can be considered, such as using WLM and TiCAP on the failover node or implementing a standby node for the failover workload.

There are several other areas to consider when implementing VMs to achieve the best possible performance. The Best Practices for Integrity Virtual Machines white paper contains additional information on the following VM configuration recommendations.

When creating VMs:
- Consider how using uni-processor vs. multi-processor VMs can affect the overall performance of CPU resources on the VM host. The use of multi-processor VMs may require the tuning of node timeout and heartbeat interval parameters in some instances to avoid false detection of node failures.
- Avoid over-allocating memory for VMs to prevent memory management interrupts and to allow the remaining memory on the VM host to be used by other VMs. Memory sizes will vary based on VM guest application requirements. The HP Integrity Virtual Machines Installation, Configuration, and Administration manual and the HP Integrity Virtual Machines Release Notes have sections describing VM host system requirements that can help you determine the amount of memory that will be required for a VM host, based on the memory sizes and number of VM guests that will be running on the VM host. Typical memory requirements are:
  - HP-UX 11i v2 VM guests use 1.5 GB (at least 2 GB of memory is required when using CFS 5.0)
  - HP-UX 11i v3 VM guests use 3 GB
  - Linux RH4 VM guests use 1.5 GB
  - Linux SLES10 VM guests use 1.5 GB
  - Windows Server VM guests use 1 GB
For virtual mass storage:
- Consider the performance tradeoffs for each storage type (e.g., file, logical volume, disk, partition) as well as the flexibility of managing the storage used by the VMs. The best disk I/O performance for VMs is generally achieved by mapping virtual disks directly to physical disks (or LUNs). Note that VM as Node configurations require physical disk backing stores due to their VM-specific storage implementation.

For virtual networking:
- HP Auto Port Aggregation (APA) can be used to add network capacity to VMs, as well as to provide network redundancy.
- Maintain data center network topologies and their primary functions (e.g., primary/standby LANs) by appropriately mapping vswitches to VM host NICs to preserve existing network functionality and performance.

When tuning VMs:
- The guest OS for a given VM should be tuned for the specific application running within the VM, as recommended by the application provider.

Software Upgrades
Serviceguard has a rolling upgrade capability that allows Serviceguard and operating system software to be updated within specific version ranges while allowing the cluster to remain operational. Rolling upgrades can be performed in both VM as Package and VM as Node configurations by moving either the VM guest packages or the application packages to an adoptive node and performing the upgrade on the node that was previously running the packages. Once the upgrade has been completed, the packages can be moved back to the upgraded node and the process repeated for the remaining nodes in the cluster; a hedged command sketch of this package movement is shown below. Since moving to Integrity VM B.04.00 requires an upgrade of the VM host from HP-UX 11i v2 to 11i v3, there are several restrictions that need to be considered and procedures that must be followed to complete this upgrade successfully. This information is available in the technical white paper titled Upgrading to Integrity VM Version 4.0 from 3.X.
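As a sketch of the rolling-upgrade movement described above (the package, node, and guest names are hypothetical, and the authoritative procedure for your versions is the one in the Serviceguard rolling upgrade documentation):

    # Move the VM (or application) package off the node to be upgraded
    cmhaltpkg vmpkg1
    cmrunpkg -n node2 vmpkg1
    cmmodpkg -e vmpkg1          # re-enable package switching after a manual move

    # Take the node out of the cluster, upgrade it, then rejoin
    cmhaltnode -f node1
    # ... perform the OS / Integrity VM / Serviceguard upgrade on node1 ...
    cmrunnode node1

    # Move the package back and repeat for the remaining cluster nodes
    cmhaltpkg vmpkg1
    cmrunpkg -n node1 vmpkg1
    cmmodpkg -e vmpkg1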
Selecting the Best Model to Meet Your Requirements
The following steps can help with the selection of the appropriate configuration model for designing and implementing Serviceguard clusters when using VMs for consolidation:

Establish Design Goals:
- With consolidation being the primary goal when using Integrity VM, determine the secondary design goals for the consolidation effort (e.g., high availability for specific applications, good application performance under normal operations, acceptable performance under failover conditions, etc.).

Cluster Configuration Design and Implementation:
- Architect the cluster node configuration to meet the stated design goals. Follow standard HA design practices using redundant components (e.g., I/O cards, disks, etc.). Refer to the HP Managing Serviceguard manual (see the High Availability Technical Documentation link listed at the end of this white paper) as a design guide.
- Determine which implementation model (VMs as Packages or VMs as Nodes) would best support the availability and monitoring requirements of your applications by referring to the Example Use Cases and Usage Considerations sections of this white paper.
- Configure VM hosts and guests for optimal network utilization, disk I/O utilization, and performance as part of the consolidation design. Refer to the Best Practices for Integrity Virtual Machines white paper, available on the HP Integrity VM Information Library web site, for helpful tips.
- Avoid configuration complexity with Serviceguard and Integrity VM where possible to reduce implementation difficulties and ease future support work; remember to Keep It Simple. Refer to the Integrity Virtual Machines Installation, Configuration, and Administration manual on http://docs.hp.com for configuration recommendations and for instructions on using the HP Serviceguard for Integrity Virtual Machines Toolkit.

Pre-production Testing and Post-production Support:
- Perform failover testing of all guest VMs (for VMs as Packages configurations) and applications (for VMs as Nodes configurations) prior to production release to ensure successful recovery in the event of a failure; a sketch of such a test is shown after this list.
- Keep all system software (e.g., HP-UX, Integrity VM, Serviceguard, etc.) up-to-date. Obtain the latest patches and take advantage of newly supported features as the Integrity VM product evolves.
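A minimal pre-production check for a VM as Package configuration, assuming hypothetical cluster, package, node, and guest names and configuration file paths (the package itself would be built with the HP Serviceguard for Integrity Virtual Machines Toolkit as described in its documentation), might validate the applied configuration and then exercise a failover:

    # Validate and apply the cluster and package configuration files
    cmcheckconf -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/vmpkg1/vmpkg1.conf
    cmapplyconf -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/vmpkg1/vmpkg1.conf

    # Exercise a failover of the VM package to its adoptive node
    cmhaltpkg vmpkg1
    cmrunpkg -n node2 vmpkg1
    cmmodpkg -e vmpkg1

    # Confirm the package is running on node2 and the guest restarted on that host
    cmviewcl -v
    hpvmstatus -P appvm1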
Summary - Putting it All Together
With the integration of Serviceguard with Integrity VM, there are several configuration models that can be used to effectively design clusters when using VMs for consolidation:
- VMs as Packages - The Virtual Machine is encapsulated within a Serviceguard package, allowing failover of the VM between cluster nodes (Serviceguard runs on the VM host).
- VMs as Nodes - The Virtual Machine is a member of a Serviceguard cluster, allowing failover of application packages between other physical or VM nodes in the cluster (Serviceguard runs within the VM guest).
Using the configuration model that best meets your business and application requirements, you can design and implement a complete solution that achieves flexible workload consolidation, application isolation, and high availability with Serviceguard and Integrity VM.

Glossary
Term - Definition
Integrity VM - HP Integrity Virtual Machines product
VM - Virtual hardware system representing a collection of virtual hardware devices
VM host - A physical system that includes a host Operating System and Integrity VM software for executing one or more VM guests
VM guest - A VM and its Operating System running on a VM host
Guest OS - Operating System instance installed on the Virtual Machine
Physical Node - A single server or npar (hard partition)
Serviceguard - HP product for creating high availability clusters of HP 9000 or HP Integrity servers
Package - A grouping of application services (individual HP-UX processes) under Serviceguard control
For more information
http://www.hp.com/go/partitions - HP Partitioning Continuum for HP-UX 11i, HP, 2008
http://www.hp.com/go/integrity - HP Integrity Server Family Overview, HP, 2008
http://www.hp.com/go/vse - HP Virtual Server Environment, HP, 2008
http://www.docs.hp.com/en/vse - HP Virtual Server Environment (VSE) Technical Documentation, HP, 2008
http://www.docs.hp.com/en/ha - HP High Availability Technical Documentation, HP, 2008
http://h71028.www7.hp.com/enterprise/cache/262803-0-0-0-121.html - HP Integrity Virtual Machines Information Library, HP, 2008

© 2006, 2007, 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Oracle is a registered US trademark of Oracle Corporation, Redwood City, California.
4AAi-1170ENW, December 2008