Parallels Clustering in Virtuozzo-Based Systems
Chapter 1

This document provides general information on clustering in Virtuozzo-based systems. You will learn which clustering scenarios are supported in Virtuozzo Containers 4.0 and how they can be put into practice effectively in your working environments. The document does not cover setting up and configuring clusters in Virtuozzo-based systems. For detailed information on these activities, please turn to the following resources:

- The Deploying Microsoft Clusters in Virtuozzo-Based Systems document familiarizes you with using the MSCS (Microsoft Cluster Service) and NLB (Network Load Balancing) clustering technologies to create reliable and highly available clusters, including Virtuozzo Failover Clusters, in Virtuozzo-based systems.
- The Deploying RHCS Clusters in Virtuozzo-Based Systems document describes how to set up Virtuozzo failover and data sharing clusters on Hardware Nodes running the RHEL 5 and CentOS 5 operating systems.
Introduction

Clustering is a technology that increases server availability and efficiency by creating reliable, highly available systems (clusters). Virtuozzo Containers 4.0 allows you to deploy the following types of cluster configurations in your working environments:

- Virtuozzo Failover Cluster: provides failover support for your Virtuozzo Containers installations and ensures that vital Virtuozzo services and Containers continue running in the event of planned or unplanned Hardware Node downtime.
- Container-to-Container cluster: provides high availability and reliability for your mission-critical applications by failing over the application resources from one Container acting as a cluster node to another in the event of a hardware or software failure.
- NLB cluster: distributes client requests among the cluster nodes according to the current network load, greatly increasing the availability and performance of your applications and services.
- Data sharing cluster: simplifies the management of your Containers and Virtuozzo Templates, allows fast Container migration between cluster nodes, and provides failover support for Virtuozzo mission-critical services and Containers.

The following sections describe each of these cluster configurations in detail, helping you decide which configuration best suits your needs for performance, high availability, and manageability.
Virtuozzo Failover Cluster

Using the Microsoft Windows or Red Hat clustering software, you can create a Virtuozzo Failover Cluster consisting of two or more Hardware Nodes and providing failover support for Virtuozzo mission-critical services and Containers. If one Hardware Node in the cluster fails or is taken offline as part of planned maintenance, the services and Containers from the problem server are automatically failed over to another Node in the cluster. The following picture shows a simple cluster configuration that can be used to provide high availability for your Virtuozzo Containers 4.0 installations:

Figure 1: Failing Over Containers
In this clustering scenario, the Virtuozzo Failover Cluster comprises three active nodes and one standby node. All nodes in the cluster have the same version of the Virtuozzo Containers software installed. Each active node hosts a number of Containers configured as cluster resources and capable of failing over to the standby node if the corresponding active node becomes inaccessible. For example, if the active node hosting Container4, Container5, and Container6 is taken offline as part of a planned or unplanned event, the clustering software detects the problem, fails these Containers over to the standby node, and starts them there.

All nodes in the cluster are connected to a reliable, fault-tolerant shared disk array on a Storage Area Network (SAN). The SAN provides a high-speed communication channel (via SCSI, iSCSI, or Fibre Channel) between each node in the cluster and the shared disk array. The disk array stores all the cluster configuration data and Virtuozzo resources and makes these resources accessible to the active nodes in the cluster.

The primary goal of implementing a Virtuozzo Failover Cluster in your working environments is to provide a high degree of availability and reliability for Virtuozzo mission-critical services and Containers by treating them as cluster resources and failing them over to the standby Hardware Node in the event of an active node failure. You may also deploy this type of cluster to provide high availability for your applications and services (both cluster-aware and cluster-unaware) by running them inside Containers residing on any of the active Hardware Nodes participating in the Virtuozzo Failover Cluster.

Note: The clustering scenario described above is supported in:
- Virtuozzo Containers 4.0 for Windows utilizing the Microsoft Cluster Service software;
- Virtuozzo Containers 4.0 for Linux utilizing the Red Hat Cluster Suite software.
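The failover behavior described in this scenario can be sketched as a small simulation. This is illustrative only: the node and Container names are hypothetical, and in a real deployment the failover is performed by the MSCS or RHCS software, not by code like this.

```python
# Illustrative sketch of failover-cluster behavior: Containers configured as
# cluster resources move from a failed active node to the standby node.
# All names are hypothetical; real failover is handled by MSCS/RHCS.

class FailoverCluster:
    def __init__(self, active_nodes, standby):
        # active_nodes: dict mapping node name -> list of hosted Containers
        self.active_nodes = active_nodes
        self.standby = standby
        self.standby_containers = []   # Containers restarted on the standby node

    def node_failed(self, node):
        """Fail all Containers of a failed node over to the standby node."""
        containers = self.active_nodes.pop(node)
        self.standby_containers.extend(containers)
        return containers

cluster = FailoverCluster(
    {"node1": ["Container1", "Container2", "Container3"],
     "node2": ["Container4", "Container5", "Container6"],
     "node3": ["Container7", "Container8", "Container9"]},
    standby="node4",
)
moved = cluster.node_failed("node2")   # node2 is taken offline
```

After the simulated failure, Container4 through Container6 run on the standby node while the remaining active nodes are unaffected, mirroring the three-active/one-standby layout of Figure 1.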
Container-to-Container Cluster

Deploying a Container-to-Container cluster allows you to provide high availability and reliability for your mission-critical applications (both cluster-aware and cluster-unaware). In this type of cluster, one node is active and owns all the application resources at any one time, while another node is in the standby state. If the active node fails, the application resources are failed over to the standby node, which then comes online.

In Virtuozzo Containers 4.0, any Container can be configured to operate as a full member of a cluster, supporting virtually every clustering configuration that can be implemented for cluster-aware and cluster-unaware applications on standalone servers. Like any standalone server, a Container can be set up to function as an active cluster node running some vital application, or it can be configured to take over the failed application resources of another cluster node. For example, you can unite two Containers into a Container-to-Container cluster and make this cluster host the Active Directory application. In this case, the clustering software monitors the application and, when detecting a failure inside the active Container, quickly assigns responsibility for the Active Directory application to the standby Container and starts it there. You can also consolidate a Container and a standalone server to provide high availability for the Active Directory application, as shown in the picture below:
Figure 2: Failing Over Applications

In this example the cluster consists of:

- Three active nodes. These are standalone servers hosting two cluster-aware applications (Active Directory and DHCP) and one cluster-unaware application (Apache HTTP Server) and owning all the resources of these applications. The applications are configured to run in the cluster and can be failed over to a standby node in the cluster in the event of a hardware or software failure.
- Three standby nodes. These are Containers residing on a single Hardware Node running the Virtuozzo Containers 4.0 software. The Containers are configured to take control of the corresponding application resources (Apache HTTP Server, Active Directory, or DHCP) in the case of an active node failure. For example, if the Active Directory application becomes inaccessible on the node where it is currently hosted, its resources will be failed over to Container 2, which resides on the Hardware Node and is configured as the standby node for this application.

All nodes in the cluster communicate with each other across a local interconnect and share a common disk array on a SAN (connected via Fibre Channel, SCSI, or iSCSI). The shared disk array stores all the cluster configuration and application data.
Using the Virtuozzo Containers technology in this clustering configuration brings you the following benefits:

- Hardware cost reduction: instead of deploying a physical standby server for every active server in a Failover Cluster, you set up just one server with as many Virtuozzo Containers running on it as needed. Each Container can be set up as a standby server, just like a physical machine, but shares the resources of the same host with the rest of the Container-based standby servers.
- More efficient hardware utilization: a standby server in a Failover Cluster normally spends most of its time in standby mode. When a standalone physical server is used, its resources are greatly underutilized. Using Containers to create the standby nodes enables a high density of Containers on the same server and brings hardware utilization to the optimum level. Once an active standalone server fails, its workload fails over to the passive Container, which, thanks to the unique resource management and scalability available with Virtuozzo Containers 4.0, can be scaled up to the entire resources of the server, potentially maintaining the same level of processing as on the original standalone server.
- Maintenance cost reduction: by using the Virtuozzo Containers software, you can consolidate many physical standby servers into one or a few servers, significantly reducing the costs associated with hardware and software maintenance.

Along with using physical SAN-based storage devices, Virtuozzo Containers 4.0 also allows you to create special loopback files on the Hardware Node and to configure them as shared storage devices for your clusters. However, using loopback files as cluster storage devices imposes a number of limitations on the cluster configuration. The main limitations are listed below:

- A cluster may consist of Containers only.
- All Containers participating in such a cluster must reside on one and the same Hardware Node, i.e. on the Node where the corresponding loopback file is located.
- All cluster and application resources are hosted on a single Hardware Node and are, therefore, not protected from hardware crashes.

In spite of these limitations, loopback-file-based clusters are perfectly suitable for dealing with software crashes and administrative errors. They can also be used for testing and demonstration purposes.

Note: The clustering scenario described above is supported in Virtuozzo Containers 4.0 for Windows utilizing the Microsoft Cluster Service software.
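The active/standby ownership rule at the heart of this section (exactly one node owns an application's resources at any one time) can be modeled in a few lines. The Container names are hypothetical, and real ownership transfer is performed by the Microsoft clustering software; this is only a sketch of the state change involved.

```python
# Illustrative model of a two-node Container-to-Container cluster: one node
# owns an application's resources at any time; on failure, ownership
# transfers to the standby node. Names are hypothetical.

class AppResource:
    def __init__(self, name, active, standby):
        self.name = name
        self.owner = active      # node currently owning the resource
        self.standby = standby   # node that takes over on failure

    def fail_over(self):
        """Transfer ownership of the resource to the standby node."""
        self.owner, self.standby = self.standby, self.owner
        return self.owner

# Two Containers hosting Active Directory, as in the example above:
ad = AppResource("Active Directory", active="Container1", standby="Container2")
new_owner = ad.fail_over()   # Container1 fails; Container2 comes online
```

After failover, the roles are swapped: the former standby Container owns the resources, and the recovered Container can later act as the standby.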
NLB Cluster

Network Load Balancing (NLB) provides a high level of reliability, availability, and scalability for your mission-critical applications and services. In Virtuozzo Containers 4.0, Containers can be combined with each other or with standalone servers to form an NLB cluster. The following picture shows a typical NLB cluster configuration that can be deployed in your working environments:

Figure 3: Deploying NLB Cluster

In this configuration the cluster consists of six Containers residing on two Hardware Nodes: Container 1, Container 2, and Container 3 on the first Hardware Node, and Container 1, Container 2, and Container 3 on the second Hardware Node. Each Container runs a separate copy of an application (e.g. a Web server) and is configured to respond to client requests. Network Load Balancing distributes incoming client requests across the Containers in the cluster in accordance with the current network load.

This cluster configuration has the following main features:

- If one Container on a certain Hardware Node fails, the load is automatically redistributed among the remaining active Containers on both Nodes. When the failed Container comes online again, it transparently rejoins the cluster and regains its share of the workload.
- If all Containers on one and the same Node fail (e.g. when the Hardware Node is taken offline as part of an unplanned event), the incoming network traffic is redirected to the Containers on the second Node. When the Containers become functional again, they transparently rejoin the cluster.
As the traffic to your network applications increases, or as your applications require more server power, you can dynamically add another Node to the cluster and create three more Containers on this Node.

Note: The clustering scenario described above is supported in Virtuozzo Containers 4.0 for Windows utilizing the Microsoft Network Load Balancing software.
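The load-distribution and rejoin behavior described above can be sketched as a toy round-robin balancer. This is illustrative only: Microsoft NLB uses its own distribution algorithm driven by the actual network load, and the host names here are hypothetical.

```python
import itertools

# Toy model of NLB behavior: requests are spread across the healthy cluster
# hosts; a failed host is skipped, and a rejoined host regains its share.
class NLBCluster:
    def __init__(self, hosts):
        self.hosts = list(hosts)   # all configured cluster hosts
        self.down = set()          # hosts currently failed

    def healthy(self):
        return [h for h in self.hosts if h not in self.down]

    def distribute(self, n_requests):
        """Assign n_requests round-robin across the healthy hosts."""
        targets = itertools.cycle(self.healthy())
        return [next(targets) for _ in range(n_requests)]

cluster = NLBCluster(["CT1@node1", "CT2@node1", "CT1@node2", "CT2@node2"])
cluster.down.add("CT2@node1")          # one Container fails
assignments = cluster.distribute(6)    # load redistributes over the rest
cluster.down.discard("CT2@node1")      # the Container rejoins the cluster
```

While the Container is down it receives no requests; once it is removed from the failed set, `distribute` includes it again and it regains its share of the workload.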
Data Sharing Cluster

Data sharing clusters are server farms that share storage devices on a storage area network (SAN) and share the data on those storage devices. In a data sharing cluster, any server can read or write any file on a common shared storage device. The following picture illustrates the structure of a typical data sharing cluster:

Figure 4: Deploying Data Sharing Cluster

In this example the data sharing cluster comprises four identical Hardware Nodes running the Virtuozzo Containers 4.0 software. Three Hardware Nodes in the cluster are active, meaning that they currently host a number of Containers, and one Hardware Node is configured as a standby server (e.g. it can host the Containers from an active node when that node is taken offline for maintenance).
The nodes are connected to a shared disk array on the SAN over a high-speed communication channel (via iSCSI or Fibre Channel). The Virtuozzo Templates and Container private data from all servers in the cluster are kept on this shared disk array and can be accessed simultaneously by any server.

Deploying a data sharing cluster made up of Virtuozzo Hardware Nodes allows you to achieve the following main goals:

- Simplify the management of your Containers and Virtuozzo Templates, since all Containers and templates reside on a single disk array shared by all nodes in the cluster.
- Greatly speed up the migration of running Containers between the cluster nodes. In fact, the migration is almost imperceptible to users: since all Container data in the cluster is stored on the shared SAN storage, there is no need to move this data between the Nodes during the Container migration.
- Provide failover support for Virtuozzo vital services and Containers. Each server in the cluster runs the clustering software responsible for monitoring the health of Virtuozzo Containers installations and failing over the services and Containers from a failed node to the standby node.

Note: The clustering scenario described above is supported in Virtuozzo Containers 4.0 for Linux utilizing Red Hat Cluster Suite and Red Hat GFS.
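Why shared storage makes Container migration so fast can be illustrated with a toy cost model. The sizes below are hypothetical, and real migration also involves runtime state handled by the Virtuozzo tools; the point is only that on a data sharing cluster the Container private area never moves between nodes.

```python
# Toy model: migrating a Container copies its private data between nodes only
# when the source and destination do NOT share storage. On a data sharing
# cluster the private area already sits on the SAN, so only runtime state
# moves. All sizes are hypothetical, in megabytes.

def migration_transfer_mb(private_data_mb, runtime_state_mb, shared_storage):
    """Estimate how much data a Container migration must copy between nodes."""
    if shared_storage:
        return runtime_state_mb                # private data stays on the SAN
    return runtime_state_mb + private_data_mb  # private area must be copied

# A 20 GB Container with 512 MB of runtime state:
standalone = migration_transfer_mb(20480, 512, shared_storage=False)
data_sharing = migration_transfer_mb(20480, 512, shared_storage=True)
```

Under these assumed figures the shared-storage migration copies roughly forty times less data, which is why the process is almost imperceptible to users.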