Red Hat Enterprise Linux 7 High Availability Add-On Administration. Configuring and Managing the High Availability Add-On
Legal Notice

Copyright 2015 Red Hat, Inc. and others.

This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the United States and other countries. Java is a registered trademark of Oracle and/or its affiliates. XFS is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project. The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community. All other trademarks are the property of their respective owners.

Abstract

High Availability Add-On Administration describes the configuration and management of the High Availability Add-On for Red Hat Enterprise Linux 7.
Table of Contents

Chapter 1. Creating a Red Hat High-Availability Cluster with Pacemaker
  1.1. Cluster Software Installation
  1.2. Cluster Creation
  1.3. Fencing Configuration

Chapter 2. An active/passive Apache Web Server in a Red Hat High Availability Cluster
  2.1. Configuring an LVM Volume with an ext4 File System
  2.2. Web Server Configuration
  2.3. Exclusive Activation of a Volume Group in a Cluster
  2.4. Creating the Resources and Resource Groups with the pcs Command
  2.5. Testing the Resource Configuration

Chapter 3. An active/passive NFS Server in a Red Hat High Availability Cluster
  3.1. Creating the NFS Cluster
  3.2. Configuring an LVM Volume with an ext4 File System
  3.3. NFS Share Setup
  3.4. Exclusive Activation of a Volume Group in a Cluster
  3.5. Configuring the Cluster Resources
  3.6. Testing the Resource Configuration

Revision History
Chapter 1. Creating a Red Hat High-Availability Cluster with Pacemaker

This chapter describes the procedure for creating a Red Hat High Availability two-node cluster using pcs. After you have created a cluster, you can configure the resources and resource groups that you require.

Configuring the cluster provided in this chapter requires that your system include the following components:

- 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com.
- Network switches for the private network, required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
- A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com.

This chapter is divided into three sections.

- Section 1.1, "Cluster Software Installation" provides the procedure for installing the cluster software.
- Section 1.2, "Cluster Creation" provides the procedure for configuring a two-node cluster.
- Section 1.3, "Fencing Configuration" provides the procedure for configuring fencing devices for each node of the cluster.

1.1. Cluster Software Installation

The procedure for installing and configuring a cluster is as follows.

1. On each node in the cluster, install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel.

   # yum install pcs fence-agents-all

2. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.

   Note: You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If the firewalld daemon is installed, you can determine whether it is running with the firewall-cmd --state command.

   # firewall-cmd --permanent --add-service=high-availability
   # firewall-cmd --add-service=high-availability
3. In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster, which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node.

   # passwd hacluster
   Changing password for user hacluster.
   New password:
   Retype new password:
   passwd: all authentication tokens updated successfully.

4. Before the cluster can be configured, the pcsd daemon must be started and enabled to start at boot time on each node. This daemon works with the pcs command to manage configuration across the nodes in the cluster. On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start.

   # systemctl start pcsd.service
   # systemctl enable pcsd.service

5. Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs. The following command authenticates user hacluster on z1.example.com for both of the nodes in the example two-node cluster, z1.example.com and z2.example.com.

   [root@z1 ~]# pcs cluster auth z1.example.com z2.example.com
   Username: hacluster
   Password:
   z1.example.com: Authorized
   z2.example.com: Authorized

1.2. Cluster Creation

This procedure creates a Red Hat High Availability Add-On cluster that consists of the nodes z1.example.com and z2.example.com.

1. Execute the following command from z1.example.com to create the two-node cluster my_cluster that consists of nodes z1.example.com and z2.example.com. This will propagate the cluster configuration files to both nodes in the cluster. This command includes the --start option, which will start the cluster services on both nodes in the cluster.

   [root@z1 ~]# pcs cluster setup --start --name my_cluster \
   z1.example.com z2.example.com
   z1.example.com: Succeeded
   z1.example.com: Starting Cluster...
   z2.example.com: Succeeded
   z2.example.com: Starting Cluster...

2. Enable the cluster services to run on each node in the cluster when the node is booted.
   Note: For your particular environment, you may choose to leave the cluster services disabled by skipping this step. This allows you to ensure that if a node goes down, any issues with your cluster or your resources are resolved before the node rejoins the cluster. If you leave the cluster services disabled, you will need to manually start the services when you reboot a node by executing the pcs cluster start command on that node.

   # pcs cluster enable --all

You can display the current status of the cluster with the pcs cluster status command.

   [root@z1 ~]# pcs cluster status
   Cluster Status:
    Last updated: Thu Jul 25 13:01
    Last change: Thu Jul 25 13:04 via crmd on z2.example.com
    Stack: corosync
    Current DC: z2.example.com (2) - partition with quorum
    Version: el7-9abe687
    2 Nodes configured
    0 Resources configured

1.3. Fencing Configuration

You must configure a fencing device for each node in the cluster. For general information about configuring fencing devices, see the Red Hat Enterprise Linux 7 High Availability Add-On Reference.

Note: When configuring a fencing device, you should ensure that your fencing device does not share power with the node that it controls.

This example uses the APC power switch with a host name of zapc.example.com to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map and pcmk_host_list options.

You create a fencing device by configuring the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named myapc that uses the fence_apc_snmp fencing agent for nodes z1.example.com and z2.example.com. The pcmk_host_map option maps z1.example.com to port 1, and z2.example.com to port 2. The login value and password for the APC device are both apc. By default, this device will use a monitor interval of 60s for each node. Note that you can use an IP address when specifying the host name for the nodes.

   [root@z1 ~]# pcs stonith create myapc fence_apc_snmp params \
   ipaddr="zapc.example.com" pcmk_host_map="z1.example.com:1;z2.example.com:2" \
   pcmk_host_check="static-list" pcmk_host_list="z1.example.com,z2.example.com" \
   login="apc" passwd="apc"

Note: When you create a fence_apc_snmp stonith device, you may see the following warning message, which you can safely ignore:

   Warning: missing required option(s): 'port, action' for resource type: stonith:fence_apc_snmp

The following command displays the parameters of an existing STONITH device.

   [root@rh7-1 ~]# pcs stonith show myapc
   Resource: myapc (class=stonith type=fence_apc_snmp)
    Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 pcmk_host_check=static-list pcmk_host_list=z1.example.com,z2.example.com login=apc passwd=apc
    Operations: monitor interval=60s (myapc-monitor-interval-60s)
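Before relying on the fencing configuration, you can optionally verify that the myapc device can actually fence a node. The commands below are one possible check, assuming the device configured above and that a brief, disruptive reboot of z2.example.com is acceptable in your environment.

   [root@z1 ~]# pcs stonith show --full            # review the full stonith resource configuration
   [root@z1 ~]# pcs stonith fence z2.example.com   # ask the cluster to fence (reboot) z2 through the APC switch
   [root@z1 ~]# pcs status                         # z2 should be reported offline, then rejoin once it has rebooted

If the fence action fails, recheck the ipaddr, login, passwd, and pcmk_host_map values against the APC switch configuration.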
Chapter 2. An active/passive Apache Web Server in a Red Hat High Availability Cluster

This chapter describes how to configure an active/passive Apache web server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster using pcs to configure cluster resources. In this use case, clients access the Apache web server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.

Figure 2.1, "Apache Web Server in a Red Hat High Availability Two-Node Cluster" shows a high-level overview of the cluster. The cluster is a two-node Red Hat High Availability cluster which is configured with a network power switch and with shared storage. The cluster nodes are connected to a public network, for client access to the Apache web server through a virtual IP. The Apache server runs on either Node 1 or Node 2, each of which has access to the storage on which the Apache data is kept.

Figure 2.1. Apache Web Server in a Red Hat High Availability Two-Node Cluster

This use case requires that your system include the following components:

- A two-node Red Hat High Availability cluster with power fencing configured for each node. This procedure uses the cluster example provided in Chapter 1, Creating a Red Hat High-Availability Cluster with Pacemaker.
- A public virtual IP address, required for the Apache web server.
- Shared storage for the nodes in the cluster, using iSCSI or Fibre Channel.

The cluster is configured with an Apache resource group, which contains the cluster components that the web server requires: an LVM resource, a file system resource, an IP address resource, and a web server resource. This resource group can fail over from one node of the cluster to the other, allowing either node to run the web server. Before creating the resource group for this cluster, you will
perform the following procedures:

1. Configure an ext4 file system mounted on the logical volume my_lv, as described in Section 2.1, "Configuring an LVM Volume with an ext4 File System".
2. Configure a web server, as described in Section 2.2, "Web Server Configuration".
3. Ensure that only the cluster is capable of activating the volume group that contains my_lv, and that the volume group will not be activated outside of the cluster on startup, as described in Section 2.3, "Exclusive Activation of a Volume Group in a Cluster".

After performing these procedures, you create the resource group and the resources it contains, as described in Section 2.4, "Creating the Resources and Resource Groups with the pcs Command".

2.1. Configuring an LVM Volume with an ext4 File System

This use case requires that you create an LVM logical volume on storage that is shared between the nodes of the cluster. The following procedure creates an LVM logical volume and then creates an ext4 file system on that volume. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created.

Note: LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only.

Since the /dev/sdb1 partition is storage that is shared, you perform this procedure on one node only.

1. Create an LVM physical volume on partition /dev/sdb1.

   # pvcreate /dev/sdb1
   Physical volume "/dev/sdb1" successfully created

2. Create the volume group my_vg that consists of the physical volume /dev/sdb1.

   # vgcreate my_vg /dev/sdb1
   Volume group "my_vg" successfully created

3. Create a logical volume using the volume group my_vg.

   # lvcreate -L450 -n my_lv my_vg
   Rounding up size to full physical extent
   Logical volume "my_lv" created

   You can use the lvs command to display the logical volume.

   # lvs
     LV     VG     Attr       LSize Pool Origin Data%  Move Log Copy%  Convert
     my_lv  my_vg  -wi-a-----
4. Create an ext4 file system on the logical volume my_lv.

   # mkfs.ext4 /dev/my_vg/my_lv
   mke2fs (21-Jan-2013)
   Filesystem label=
   OS type: Linux
   ...

2.2. Web Server Configuration

The following procedure configures an Apache web server.

1. Ensure that the Apache HTTPD server is installed on each node in the cluster. You also need the wget tool installed on the cluster to be able to check the status of the Apache web server. On each node, execute the following command.

   # yum install -y httpd wget

2. In order for the Apache resource agent to get the status of the Apache web server, ensure that the following text is present in the /etc/httpd/conf/httpd.conf file on each node in the cluster, and ensure that it has not been commented out. If this text is not already present, add the text to the end of the file.

   <Location /server-status>
     SetHandler server-status
     Order deny,allow
     Deny from all
     Allow from 127.0.0.1
   </Location>

3. Create a web page for Apache to serve up. On one node in the cluster, mount the file system you created in Section 2.1, "Configuring an LVM Volume with an ext4 File System", create the file index.html on that file system, then unmount the file system.

   # mount /dev/my_vg/my_lv /var/www/
   # mkdir /var/www/html
   # mkdir /var/www/cgi-bin
   # mkdir /var/www/error
   # restorecon -R /var/www
   # cat <<-END >/var/www/html/index.html
   <html>
   <body>Hello</body>
   </html>
   END
   # umount /var/www
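Before the cluster manages the web server, you can optionally sanity-check the Apache configuration on one node. The following commands are one way to do this, assuming the httpd.conf addition above and that the shared file system is temporarily mounted on /var/www for the test.

   # httpd -t                                   # check the syntax of /etc/httpd/conf/httpd.conf
   # mount /dev/my_vg/my_lv /var/www/
   # systemctl start httpd.service
   # wget -O - http://127.0.0.1/server-status   # the status page that the apache resource agent will poll
   # systemctl stop httpd.service               # stop and unmount again; the cluster will manage both from now on
   # umount /var/www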
2.3. Exclusive Activation of a Volume Group in a Cluster

The following procedure configures the volume group in a way that will ensure that only the cluster is capable of activating the volume group, and that the volume group will not be activated outside of the cluster on startup. If the volume group is activated by a system outside of the cluster, there is a risk of corrupting the volume group's metadata.

This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to activate automatically on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd.

Perform the following procedure on each node in the cluster.

1. Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example.

   # vgs --noheadings -o vg_name
     my_vg
     rhel_home
     rhel_root

2. Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows:

   volume_list = [ "rhel_root", "rhel_home" ]

   Note: If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [].

3. Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs image with the following command. This command may take up to a minute to complete.

   # dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

4. Reboot the node.
   Note: If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd image is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node.

5. When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node then run the following command.

   # pcs cluster start

   Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on each of the nodes with the following command.

   # pcs cluster start --all

2.4. Creating the Resources and Resource Groups with the pcs Command

This use case requires that you create four cluster resources. To ensure these resources all run on the same node, they are configured as part of the resource group apachegroup. The resources to create are as follows, listed in the order in which they will start.

1. An LVM resource named my_lvm that uses the LVM volume group you created in Section 2.1, "Configuring an LVM Volume with an ext4 File System".
2. A Filesystem resource named my_fs, which uses the file system device /dev/my_vg/my_lv you created in Section 2.1, "Configuring an LVM Volume with an ext4 File System".
3. An IPaddr2 resource, which is a floating IP address for the apachegroup resource group. The IP address must not be one already associated with a physical node. If the IPaddr2 resource's NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP addresses used by the cluster nodes, otherwise the NIC device to assign the floating IP address cannot be properly detected.
4. An apache resource named Website that uses the index.html file and the Apache configuration you defined in Section 2.2, "Web Server Configuration".

The following procedure creates the resource group apachegroup and the resources that the group contains. The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only.

1. The following command creates the LVM resource my_lvm. This command specifies the exclusive=true parameter to ensure that only the cluster is capable of activating the LVM logical volume. Because the resource group apachegroup does not yet exist, this command creates the resource group.
   [root@z1 ~]# pcs resource create my_lvm LVM volgrpname=my_vg \
   exclusive=true --group apachegroup

   When you create a resource, the resource is started automatically. You can use the following command to confirm that the resource was created and has started.

   # pcs resource show
    Resource Group: apachegroup
        my_lvm (ocf::heartbeat:LVM):   Started

   You can manually stop and start an individual resource with the pcs resource disable and pcs resource enable commands.

2. The following commands create the remaining resources for the configuration, adding them to the existing resource group apachegroup. Replace <floating_IP> with the public virtual IP address for this cluster.

   [root@z1 ~]# pcs resource create my_fs Filesystem \
   device="/dev/my_vg/my_lv" directory="/var/www" fstype="ext4" --group \
   apachegroup

   [root@z1 ~]# pcs resource create VirtualIP IPaddr2 ip=<floating_IP> \
   cidr_netmask=24 --group apachegroup

   [root@z1 ~]# pcs resource create Website apache \
   configfile="/etc/httpd/conf/httpd.conf" \
   statusurl="http://127.0.0.1/server-status" --group apachegroup

3. After creating the resources and the resource group that contains them, you can check the status of the cluster. Note that all four resources are running on the same node.

   [root@z1 ~]# pcs status
   Cluster name: my_cluster
   Last updated: Wed Jul 31 16:38
   Last change: Wed Jul 31 16:42 via crm_attribute on z1.example.com
   Stack: corosync
   Current DC: z2.example.com (2) - partition with quorum
   Version: el7-9abe687
   2 Nodes configured
   6 Resources configured

   Online: [ z1.example.com z2.example.com ]

   Full list of resources:

    myapc  (stonith:fence_apc_snmp):       Started z1.example.com
    Resource Group: apachegroup
        my_lvm         (ocf::heartbeat:LVM):           Started z1.example.com
        my_fs          (ocf::heartbeat:Filesystem):    Started z1.example.com
        VirtualIP      (ocf::heartbeat:IPaddr2):       Started z1.example.com
        Website        (ocf::heartbeat:apache):        Started z1.example.com

   Note that if you have not configured a fencing device for your cluster, as described in
   Section 1.3, "Fencing Configuration", by default the resources do not start.

4. Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello".

   Hello

   If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. For information on the pcs resource debug-start command, see the High Availability Add-On Reference manual.

2.5. Testing the Resource Configuration

In the cluster status display shown in Section 2.4, "Creating the Resources and Resource Groups with the pcs Command", all of the resources are running on node z1.example.com. You can test whether the resource group fails over to node z2.example.com by using the following procedure to put the first node in standby mode, after which the node will no longer be able to host resources.

1. The following command puts node z1.example.com in standby mode.

   [root@z1 ~]# pcs cluster standby z1.example.com

2. After putting node z1 in standby mode, check the cluster status. Note that the resources should now all be running on z2.

   [root@z1 ~]# pcs status
   Cluster name: my_cluster
   Last updated: Wed Jul 31 17:16
   Last change: Wed Jul 31 17:18 via crm_attribute on z1.example.com
   Stack: corosync
   Current DC: z2.example.com (2) - partition with quorum
   Version: el7-9abe687
   2 Nodes configured
   6 Resources configured

   Node z1.example.com (1): standby
   Online: [ z2.example.com ]

   Full list of resources:

    myapc  (stonith:fence_apc_snmp):       Started z1.example.com
    Resource Group: apachegroup
        my_lvm         (ocf::heartbeat:LVM):           Started z2.example.com
        my_fs          (ocf::heartbeat:Filesystem):    Started z2.example.com
        VirtualIP      (ocf::heartbeat:IPaddr2):       Started z2.example.com
        Website        (ocf::heartbeat:apache):        Started z2.example.com

   The web site at the defined IP address should still display, without interruption.

3. To remove z1 from standby mode, run the following command.
   [root@z1 ~]# pcs cluster unstandby z1.example.com

   Note: Removing a node from standby mode does not in itself cause the resources to fail back over to that node. For information on controlling which node resources can run on, see the chapter on configuring cluster resources in the Red Hat High Availability Add-On Reference.
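You can also watch the web service from a client machine while the resource group fails over. The following loop is one way to observe the interruption, assuming a client with curl installed; replace <floating_IP> with the address you assigned to the VirtualIP resource.

   # while true; do date; curl -s --max-time 2 http://<floating_IP>/ || echo "request failed"; sleep 1; done

Run the loop on the client, put the active node in standby mode from a cluster node, and note how many iterations, if any, fail before "Hello" is served again from the other node.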
Chapter 3. An active/passive NFS Server in a Red Hat High Availability Cluster

This chapter describes how to configure a highly available active/passive NFS server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster using shared storage. The procedure uses pcs to configure Pacemaker cluster resources. In this use case, clients access the NFS file system through a floating IP address. The NFS server runs on one of two nodes in the cluster. If the node on which the NFS server is running becomes inoperative, the NFS server starts up again on the second node of the cluster with minimal service interruption.

This use case requires that your system include the following components:

- Two nodes, which will be used to create the cluster running the NFS server. In this example, the nodes used are z1.example.com and z2.example.com.
- A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com.
- A public virtual IP address, required for the NFS server.
- Shared storage for the nodes in the cluster, using iSCSI or Fibre Channel.

Configuring a highly available active/passive NFS server on a two-node Red Hat Enterprise Linux High Availability cluster requires that you perform the following steps.

1. Create the cluster that will run the NFS server and configure fencing for each node in the cluster, as described in Section 3.1, "Creating the NFS Cluster".
2. Configure an ext4 file system mounted on the LVM logical volume my_lv on the shared storage for the nodes in the cluster, as described in Section 3.2, "Configuring an LVM Volume with an ext4 File System".
3. Configure an NFS share on the shared storage on the LVM logical volume, as described in Section 3.3, "NFS Share Setup".
4. Ensure that only the cluster is capable of activating the LVM volume group that contains the logical volume my_lv, and that the volume group will not be activated outside of the cluster on startup, as described in Section 3.4, "Exclusive Activation of a Volume Group in a Cluster".
5. Create the cluster resources as described in Section 3.5, "Configuring the Cluster Resources".
6. Test the NFS server you have configured, as described in Section 3.6, "Testing the Resource Configuration".

3.1. Creating the NFS Cluster

Use the following procedure to install and create the NFS cluster.

1. Install the cluster software on nodes z1.example.com and z2.example.com, using the procedure provided in Section 1.1, "Cluster Software Installation".

2. Create the two-node cluster that consists of z1.example.com and z2.example.com, using the procedure provided in Section 1.2, "Cluster Creation". As in that example procedure, this use case names the cluster my_cluster.
3. Configure fencing devices for each node of the cluster, using the procedure provided in Section 1.3, "Fencing Configuration". This example configures fencing using two ports of the APC power switch with a host name of zapc.example.com.

3.2. Configuring an LVM Volume with an ext4 File System

This use case requires that you create an LVM logical volume on storage that is shared between the nodes of the cluster. The following procedure creates an LVM logical volume and then creates an ext4 file system on that volume. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created.

Note: LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only.

Since the /dev/sdb1 partition is storage that is shared, you perform this procedure on one node only.

1. Create an LVM physical volume on partition /dev/sdb1.

   [root@z1 ~]# pvcreate /dev/sdb1
   Physical volume "/dev/sdb1" successfully created

2. Create the volume group my_vg that consists of the physical volume /dev/sdb1.

   [root@z1 ~]# vgcreate my_vg /dev/sdb1
   Volume group "my_vg" successfully created

3. Create a logical volume using the volume group my_vg.

   [root@z1 ~]# lvcreate -L450 -n my_lv my_vg
   Rounding up size to full physical extent
   Logical volume "my_lv" created

   You can use the lvs command to display the logical volume.

   [root@z1 ~]# lvs
     LV     VG     Attr       LSize Pool Origin Data%  Move Log Copy%  Convert
     my_lv  my_vg  -wi-a-----

4. Create an ext4 file system on the logical volume my_lv.

   [root@z1 ~]# mkfs.ext4 /dev/my_vg/my_lv
   mke2fs (21-Jan-2013)
   Filesystem label=
   OS type: Linux
   ...
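You can optionally confirm that the logical volume and file system were created as expected before continuing. The following commands are a quick check, assuming the my_vg and my_lv names used above.

   [root@z1 ~]# lvs my_vg                  # the my_lv logical volume should be listed
   [root@z1 ~]# blkid /dev/my_vg/my_lv     # TYPE="ext4" confirms the file system is in place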
3.3. NFS Share Setup

The following procedure configures the NFS share for the NFS daemon failover. You need to perform this procedure on only one node in the cluster.

1. Create the /nfsshare directory.

   [root@z1 ~]# mkdir /nfsshare

2. Mount the ext4 file system that you created in Section 3.2, "Configuring an LVM Volume with an ext4 File System" on the /nfsshare directory.

   [root@z1 ~]# mount /dev/my_vg/my_lv /nfsshare

3. Create an exports directory tree on the /nfsshare directory.

   [root@z1 ~]# mkdir -p /nfsshare/exports
   [root@z1 ~]# mkdir -p /nfsshare/exports/export1
   [root@z1 ~]# mkdir -p /nfsshare/exports/export2

4. Place files in the exports directory for the NFS clients to access. For this example, we are creating test files named clientdatafile1 and clientdatafile2.

   [root@z1 ~]# touch /nfsshare/exports/export1/clientdatafile1
   [root@z1 ~]# touch /nfsshare/exports/export2/clientdatafile2

5. Unmount the ext4 file system and deactivate the LVM volume group.

   [root@z1 ~]# umount /dev/my_vg/my_lv
   [root@z1 ~]# vgchange -an my_vg

3.4. Exclusive Activation of a Volume Group in a Cluster

The following procedure configures the LVM volume group in a way that will ensure that only the cluster is capable of activating the volume group, and that the volume group will not be activated outside of the cluster on startup. If the volume group is activated by a system outside of the cluster, there is a risk of corrupting the volume group's metadata.

This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to activate automatically on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd.

Perform the following procedure on each node in the cluster.

1. Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example.
   # vgs --noheadings -o vg_name
     my_vg
     rhel_home
     rhel_root

2. Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows:

   volume_list = [ "rhel_root", "rhel_home" ]

   Note: If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [].

3. Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs image with the following command. This command may take up to a minute to complete.

   # dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

4. Reboot the node.

   Note: If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd image is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node.

5. When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node then run the following command.

   # pcs cluster start

   Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on all of the nodes in the cluster with the following command.

   # pcs cluster start --all
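After the node comes back up, you can optionally confirm that the volume_list setting is present in both the running configuration and the rebuilt boot image. The following commands are one way to check, assuming the dracut command above has been run; lsinitrd is provided by the dracut package.

   # grep volume_list /etc/lvm/lvm.conf
   volume_list = [ "rhel_root", "rhel_home" ]
   # lsinitrd /boot/initramfs-$(uname -r).img -f etc/lvm/lvm.conf   # if your lsinitrd supports -f, this prints the copy of lvm.conf inside the boot image
   # lvs my_vg                                                      # after a reboot, my_lv should show no "a" in its Attr column, meaning it was not activated locally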
3.5. Configuring the Cluster Resources

This section provides the procedure for configuring the cluster resources for this use case.

Note: It is recommended that when you create a cluster resource with the pcs resource create command, you execute the pcs status command immediately afterwards to verify that the resource is running. Note that if you have not configured a fencing device for your cluster, as described in Section 1.3, "Fencing Configuration", by default the resources do not start. If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. This starts the service outside of the cluster's control and knowledge. At the point the configured resources are running again, run pcs resource cleanup resource to make the cluster aware of the updates. For information on the pcs resource debug-start command, see the High Availability Add-On Reference manual.

The following procedure configures the system resources. To ensure these resources all run on the same node, they are configured as part of the resource group nfsgroup. The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only.

1. The following command creates the LVM resource named my_lvm. This command specifies the exclusive=true parameter to ensure that only the cluster is capable of activating the LVM logical volume. Because the resource group nfsgroup does not yet exist, this command creates the resource group.

   [root@z1 ~]# pcs resource create my_lvm LVM volgrpname=my_vg \
   exclusive=true --group nfsgroup

   Check the status of the cluster to verify that the resource is running.

   [root@z1 ~]# pcs status
   Cluster name: my_cluster
   Last updated: Thu Jan 8 11:13
   Last change: Thu Jan 8 11:13
   Stack: corosync
   Current DC: z2.example.com (2) - partition with quorum
   Version: a14efad
   2 Nodes configured
   3 Resources configured

   Online: [ z1.example.com z2.example.com ]

   Full list of resources:

    myapc  (stonith:fence_apc_snmp):       Started z1.example.com
    Resource Group: nfsgroup
        my_lvm         (ocf::heartbeat:LVM):   Started z1.example.com

   PCSD Status:
    z1.example.com: Online
    z2.example.com: Online
   Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled

2. Configure a Filesystem resource for the cluster.

   Note: You can specify mount options as part of the resource configuration for a Filesystem resource with the options=options parameter. Run the pcs resource describe Filesystem command for full configuration options.

   The following command configures an ext4 Filesystem resource named nfsshare as part of the nfsgroup resource group. This file system uses the LVM volume group and ext4 file system you created in Section 3.2, "Configuring an LVM Volume with an ext4 File System" and will be mounted on the /nfsshare directory you created in Section 3.3, "NFS Share Setup".

   [root@z1 ~]# pcs resource create nfsshare Filesystem \
   device=/dev/my_vg/my_lv directory=/nfsshare \
   fstype=ext4 --group nfsgroup

   Verify that the my_lvm and nfsshare resources are running.

   [root@z1 ~]# pcs status
   ...
   Full list of resources:
    myapc  (stonith:fence_apc_snmp):       Started z1.example.com
    Resource Group: nfsgroup
        my_lvm         (ocf::heartbeat:LVM):           Started z1.example.com
        nfsshare       (ocf::heartbeat:Filesystem):    Started z1.example.com
   ...

3. Create the nfsserver resource named nfs-daemon as part of the resource group nfsgroup.

   [root@z1 ~]# pcs resource create nfs-daemon nfsserver \
   nfs_shared_infodir=/nfsshare/nfsinfo nfs_no_notify=true \
   --group nfsgroup
   [root@z1 ~]# pcs status
   ...

4. Add the exportfs resources to export the /nfsshare/exports directory. These resources are part of the resource group nfsgroup. This builds a virtual directory for NFSv4 clients. NFSv3 clients can access these exports as well. In the commands that follow, replace <client_network> with the host, network, or wildcard specification of the NFS clients that are permitted to access the exports.

   [root@z1 ~]# pcs resource create nfs-root exportfs \
   clientspec=<client_network> \
   options=rw,sync,no_root_squash \
   directory=/nfsshare/exports \
   fsid=0 --group nfsgroup
   [root@z1 ~]# pcs resource create nfs-export1 exportfs \
   clientspec=<client_network> \
   options=rw,sync,no_root_squash directory=/nfsshare/exports/export1 \
   fsid=1 --group nfsgroup

   [root@z1 ~]# pcs resource create nfs-export2 exportfs \
   clientspec=<client_network> \
   options=rw,sync,no_root_squash directory=/nfsshare/exports/export2 \
   fsid=2 --group nfsgroup

5. Add the floating IP address resource that NFS clients will use to access the NFS share. The floating IP address that you specify requires a reverse DNS lookup or it must be specified in the /etc/hosts file on all nodes in the cluster. This resource is part of the resource group nfsgroup. In this example deployment, <floating_IP> stands for the floating IP address.

   [root@z1 ~]# pcs resource create nfs_ip IPaddr2 \
   ip=<floating_IP> cidr_netmask=24 --group nfsgroup

6. Add an nfsnotify resource for sending NFSv3 reboot notifications once the entire NFS deployment has initialized.

   Note: For the NFS notification to be processed correctly, the floating IP address must have a host name associated with it that is consistent on both the NFS servers and the NFS client.

   [root@z1 ~]# pcs resource create nfs-notify nfsnotify \
   source_host=<floating_IP> --group nfsgroup

After creating the resources and the resource group that contains them, you can check the status of the cluster. Note that all resources are running on the same node.

   [root@z1 ~]# pcs status
   ...
   Full list of resources:
    myapc  (stonith:fence_apc_snmp):       Started z1.example.com
    Resource Group: nfsgroup
        my_lvm         (ocf::heartbeat:LVM):           Started z1.example.com
        nfsshare       (ocf::heartbeat:Filesystem):    Started z1.example.com
        nfs-daemon     (ocf::heartbeat:nfsserver):     Started z1.example.com
        nfs-root       (ocf::heartbeat:exportfs):      Started z1.example.com
        nfs-export1    (ocf::heartbeat:exportfs):      Started z1.example.com
        nfs-export2    (ocf::heartbeat:exportfs):      Started z1.example.com
        nfs_ip         (ocf::heartbeat:IPaddr2):       Started z1.example.com
        nfs-notify     (ocf::heartbeat:nfsnotify):     Started z1.example.com
   ...
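On the node that is currently running nfsgroup, you can optionally confirm that the exports are active. The following commands are a quick check, assuming the export directories created in Section 3.3, "NFS Share Setup".

   [root@z1 ~]# exportfs -v                # the three /nfsshare/exports paths should be listed with rw,sync,no_root_squash
   [root@z1 ~]# showmount -e localhost     # should list the same export paths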
3.6. Testing the Resource Configuration

You can validate your system configuration with the following procedure. You should be able to mount the exported file system with either NFSv3 or NFSv4.

1. On a node outside of the cluster, residing in the same network as the deployment, verify that the NFS share can be seen by mounting the NFS share. In this example, the client resides in the same /24 network as the floating IP address, which is shown here as <floating_IP>.

   # showmount -e <floating_IP>
   Export list for <floating_IP>:
   /nfsshare/exports/export1 <client_network>
   /nfsshare/exports         <client_network>
   /nfsshare/exports/export2 <client_network>

2. To verify that you can mount the NFS share with NFSv4, mount the NFS share to a directory on the client node. After mounting, verify that the contents of the export directories are visible. Unmount the share after testing.

   # mkdir nfsshare
   # mount -o "vers=4" <floating_IP>:export1 nfsshare
   # ls nfsshare
   clientdatafile1
   # umount nfsshare

3. Verify that you can mount the NFS share with NFSv3. After mounting, verify that the test file clientdatafile2 is visible. Unlike NFSv4, since NFSv3 does not use the virtual file system, you must mount a specific export. Unmount the share after testing.

   # mkdir nfsshare
   # mount -o "vers=3" <floating_IP>:/nfsshare/exports/export2 nfsshare
   # ls nfsshare
   clientdatafile2
   # umount nfsshare

4. To test for failover, perform the following steps.

   a. On a node outside of the cluster, mount the NFS share and verify access to the clientdatafile1 file we created in Section 3.3, "NFS Share Setup".

      # mkdir nfsshare
      # mount -o "vers=4" <floating_IP>:export1 nfsshare
      # ls nfsshare
      clientdatafile1

   b. From a node within the cluster, determine which node in the cluster is running nfsgroup. In this example, nfsgroup is running on z1.example.com.

      [root@z1 ~]# pcs status
      ...
      Full list of resources:
       myapc  (stonith:fence_apc_snmp):       Started z1.example.com
       Resource Group: nfsgroup
           my_lvm         (ocf::heartbeat:LVM):           Started z1.example.com
           nfsshare       (ocf::heartbeat:Filesystem):    Started z1.example.com
           nfs-daemon     (ocf::heartbeat:nfsserver):     Started z1.example.com
           nfs-root       (ocf::heartbeat:exportfs):      Started z1.example.com
           nfs-export1    (ocf::heartbeat:exportfs):      Started z1.example.com
           nfs-export2    (ocf::heartbeat:exportfs):      Started z1.example.com
           nfs_ip         (ocf::heartbeat:IPaddr2):       Started z1.example.com
           nfs-notify     (ocf::heartbeat:nfsnotify):     Started z1.example.com
      ...

   c. From a node within the cluster, put the node that is running nfsgroup in standby mode.

      [root@z1 ~]# pcs cluster standby z1.example.com

   d. Verify that nfsgroup successfully starts on the other cluster node.

      [root@z1 ~]# pcs status
      ...
      Full list of resources:
       Resource Group: nfsgroup
           my_lvm         (ocf::heartbeat:LVM):           Started z2.example.com
           nfsshare       (ocf::heartbeat:Filesystem):    Started z2.example.com
           nfs-daemon     (ocf::heartbeat:nfsserver):     Started z2.example.com
           nfs-root       (ocf::heartbeat:exportfs):      Started z2.example.com
           nfs-export1    (ocf::heartbeat:exportfs):      Started z2.example.com
           nfs-export2    (ocf::heartbeat:exportfs):      Started z2.example.com
           nfs_ip         (ocf::heartbeat:IPaddr2):       Started z2.example.com
           nfs-notify     (ocf::heartbeat:nfsnotify):     Started z2.example.com
      ...

   e. From the node outside the cluster on which you have mounted the NFS share, verify that this outside node still continues to have access to the test file within the NFS mount.
      # ls nfsshare
      clientdatafile1

      Service will be lost briefly for the client during the failover, but the client should recover it with no user intervention. By default, clients using NFSv4 may take up to 90 seconds to recover the mount; this 90 seconds represents the NFSv4 file lease grace period observed by the server on startup. NFSv3 clients should recover access to the mount in a matter of a few seconds.

   f. From a node within the cluster, remove the node that was initially running nfsgroup from standby mode. This will not in itself move the cluster resources back to this node.

      [root@z1 ~]# pcs cluster unstandby z1.example.com
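If you want to observe the NFSv4 grace period described above, you can time the interruption from the client during the standby test. The following loop is one way to do this, assuming the NFSv4 mount from step 4.a is still in place on the client.

   # while true; do date; ls nfsshare || echo "share not available"; sleep 5; done

Leave the loop running while you put the active node in standby mode; the timestamps around any failed iterations show how long the client waited before the mount recovered.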
Revision History

- Mon Feb, Steven Levine: Version for 7.1 GA release
- Thu Dec, Steven Levine: Version for 7.1 Beta release
- Tue Dec, Steven Levine: Added nfs cluster configuration procedure
- Mon Dec, Steven Levine: Updating load balancer cluster procedure
- Fri Dec, Steven Levine: Updating to implement new sort order on the Red Hat Enterprise Linux splash page
- Thu Dec, Steven Levine: Version for 7.1 Beta release
- Mon Jun, Steven Levine: Version for 7.0 GA release
- Wed May, Steven Levine: Resolves #: Document volume_list usage
- Tue May, Steven Levine: Rebuild for style changes and updated draft
- Wed Apr, Steven Levine: Updated Beta draft
- Fri Dec, Steven Levine: Beta draft
- Wed Jan, Steven Levine: First version for Red Hat Enterprise Linux 7
Networking Guide Redwood Manager 3.0 August 2013
Networking Guide Redwood Manager 3.0 August 2013 Table of Contents 1 Introduction... 3 1.1 IP Addresses... 3 1.1.1 Static vs. DHCP... 3 1.2 Required Ports... 4 2 Adding the Redwood Engine to the Network...
Red Hat Linux Administration II Installation, Configuration, Software and Troubleshooting
Course ID RHL200 Red Hat Linux Administration II Installation, Configuration, Software and Troubleshooting Course Description Students will experience added understanding of configuration issues of disks,
Red Hat Satellite 5.6 Proxy Installation Guide
Red Hat Satellite 5.6 Proxy Installation Guide Configuring, registering, and updating your Red Hat Enterprise Linux clients with Red Hat Satellite Proxy Server Edition 1 John Ha Lana Brindley Daniel Macpherson
Red Hat Cloud Infrastructure 4 Quick Start for Red Hat Enterprise Virtualization and Red Hat CloudForms
Red Hat Cloud Infrastructure 4 Quick Start for Red Hat Enterprise Virtualization and Red Hat CloudForms Installing Red Hat Cloud Infrastructure Red Hat Cloud Infrastructure Documentation Team Red Hat
RedHat (RHEL) System Administration Course Summary
Contact Us: (616) 875-4060 RedHat (RHEL) System Administration Course Summary Length: 5 Days Prerequisite: RedHat fundamentals course Recommendation Statement: Students should have some experience with
istorage Server: High Availability iscsi SAN for Windows Server 2012 Cluster
istorage Server: High Availability iscsi SAN for Windows Server 2012 Cluster Tuesday, December 26, 2013 KernSafe Technologies, Inc www.kernsafe.com Copyright KernSafe Technologies 2006-2013.All right reserved.
Red Hat Satellite 5.6 Client Configuration Guide
Red Hat Satellite 5.6 Client Configuration Guide Configuring, registering, and updating your Red Hat Enterprise Linux clients with Red Hat Satellite Edition 1 John Ha Lana Brindley Daniel Macpherson Athene
Deploying IBM Lotus Domino on Red Hat Enterprise Linux 5. Version 1.0
Deploying IBM Lotus Domino on Red Hat Enterprise Linux 5 Version 1.0 November 2008 Deploying IBM Lotus Domino on Red Hat Enterprise Linux 5 1801 Varsity Drive Raleigh NC 27606-2072 USA Phone: +1 919 754
Fedora 19. Fedora Documentation Project
Fedora 19 Downloading and installing Fedora 19 on most desktop and laptop computers Fedora Documentation Project Copyright 2013 Red Hat, Inc. and others. The text of and illustrations in this document
Support for Storage Volumes Greater than 2TB Using Standard Operating System Functionality
Support for Storage Volumes Greater than 2TB Using Standard Operating System Functionality Introduction A History of Hard Drive Capacity Starting in 1984, when IBM first introduced a 5MB hard drive in
Red Hat Enterprise Virtualization 3.0 User Portal Guide. Accessing and Using Virtual Machines from the User Portal Edition 1
Red Hat Enterprise Virtualization 3.0 User Portal Guide Accessing and Using Virtual Machines from the User Portal Edition 1 Cheryn Tan David Jorm Red Hat Enterprise Virtualization 3.0 User Portal Guide
SteelEye Protection Suite for Linux v8.2.0. Network Attached Storage Recovery Kit Administration Guide
SteelEye Protection Suite for Linux v8.2.0 Network Attached Storage Recovery Kit Administration Guide October 2013 This document and the information herein is the property of SIOS Technology Corp. (previously
Parallels Cloud Server 6.0
Parallels Cloud Server 6.0 Parallels Cloud Storage Administrator's Guide March 06, 2015 Copyright 1999-2015 Parallels IP Holdings GmbH and its affiliates. All rights reserved. Parallels IP Holdings GmbH
GL254 - RED HAT ENTERPRISE LINUX SYSTEMS ADMINISTRATION III
QWERTYUIOP{ GL254 - RED HAT ENTERPRISE LINUX SYSTEMS ADMINISTRATION III This GL254 course is designed to follow an identical set of topics as the Red Hat RH254, RH255 RHCE exam prep courses with the added
Parallels Cloud Server 6.0
Parallels Cloud Server 6.0 Parallels Cloud Storage Administrator's Guide October 17, 2013 Copyright 1999-2013 Parallels IP Holdings GmbH and its affiliates. All rights reserved. Parallels IP Holdings GmbH
Parallels Virtuozzo Containers 4.7 for Linux Readme
Parallels Virtuozzo Containers 4.7 for Linux Readme This document provides the first-priority information about Parallels Virtuozzo Containers 4.7 for Linux and supplements the included documentation.
Overview: Clustering MySQL with DRBD & Pacemaker
Overview: Clustering MySQL with DRBD & Pacemaker Trent Lloyd 1 Overview Software Packages (OpenAIS, Pacemaker/CRM, DRBD) Concepts Setup & configuration Installing packages Configure
High Availability Storage
High Availability Storage High Availability Extensions Goldwyn Rodrigues High Availability Storage Engineer SUSE High Availability Extensions Highly available services for mission critical systems Integrated
SGI NAS. Quick Start Guide. 007-5865-001a
SGI NAS Quick Start Guide 007-5865-001a Copyright 2012 SGI. All rights reserved; provided portions may be copyright in third parties, as indicated elsewhere herein. No permission is granted to copy, distribute,
Parallels Cloud Server 6.0
Parallels Cloud Server 6.0 Parallels Cloud Storage Administrator's Guide August 18, 2015 Copyright 1999-2015 Parallels IP Holdings GmbH and its affiliates. All rights reserved. Parallels IP Holdings GmbH
Red Hat JBoss A-MQ 6.2 Cloud Deployment Guide
Red Hat JBoss A-MQ 6.2 Cloud Deployment Guide Centrally configure and provision assets in the cloud JBoss A-MQ Docs Team Red Hat JBoss A-MQ 6.2 Cloud Deployment Guide Centrally configure and provision
Installation Guide. Step-by-Step Guide for clustering Hyper-V virtual machines with Sanbolic s Kayo FS. Table of Contents
Distributed Data Center Virtualization using Windows Server 2008 Hyper-V and Failover Clustering beta release* *The clustered disk removal section will become obsolete once the solution ships in early
Deploying Remote Desktop Connection Broker with High Availability Step-by-Step Guide
Deploying Remote Desktop Connection Broker with High Availability Step-by-Step Guide Microsoft Corporation Published: May 2010 Abstract This guide describes the steps for configuring Remote Desktop Connection
Red Hat enterprise virtualization 3.0 feature comparison
Red Hat enterprise virtualization 3.0 feature comparison at a glance Red Hat Enterprise is the first fully open source, enterprise ready virtualization platform Compare the functionality of RHEV to VMware
istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering
istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering Tuesday, Feb 21 st, 2012 KernSafe Technologies, Inc. www.kernsafe.com Copyright KernSafe Technologies 2006-2012.
Backing up the Embedded Oracle database of a Red Hat Network Satellite
Backing up the Embedded Oracle database of a Red Hat Network Satellite By Melissa Goldin and Vladimir Zlatkin Abstract This document will help you create a backup of the Oracle database of a Red Hat Network
Cisco Prime Collaboration Deployment Troubleshooting
Cisco Prime Collaboration Deployment Troubleshooting Increase Disk Space for Migrations, page 1 General Troubleshooting Issues, page 2 Errors Seen in View Log, page 2 Lock Errors, page 6 NFS Datastores,
By the Citrix Publications Department. Citrix Systems, Inc.
Licensing: Setting Up the License Server on a Microsoft Cluster By the Citrix Publications Department Citrix Systems, Inc. Notice The information in this publication is subject to change without notice.
Pacemaker. A Scalable Cluster Resource Manager for Highly Available Services. Owen Le Blanc. I T Services University of Manchester
Pacemaker A Scalable Cluster Resource Manager for Highly Available Services Owen Le Blanc I T Services University of Manchester C V 1980, U of Manchester since 1985 CAI, CDC Cyber 170/730, Prime 9955 HP
How To Install Acronis Backup & Recovery 11.5 On A Linux Computer
Acronis Backup & Recovery 11.5 Server for Linux Update 2 Installation Guide Copyright Statement Copyright Acronis International GmbH, 2002-2013. All rights reserved. Acronis and Acronis Secure Zone are
Agenda Zabawę 2 czas zacząć Co to jest corosync? Co to jest pacemaker? Zastosowania Podsumowanie
Corosync i pacemaker w jednym stali domku - czyli jak pozwolić adminowi spać w nocy II Agenda Zabawę 2 czas zacząć Co to jest corosync? Co to jest pacemaker? Zastosowania Podsumowanie Maciej Miłaszewski
Acronis Storage Gateway
Acronis Storage Gateway DEPLOYMENT GUIDE Revision: 12/30/2015 Table of contents 1 Introducing Acronis Storage Gateway...3 1.1 Supported storage backends... 3 1.2 Architecture and network diagram... 4 1.3
Deploying Microsoft Clusters in Parallels Virtuozzo-Based Systems
Parallels Deploying Microsoft Clusters in Parallels Virtuozzo-Based Systems Copyright 1999-2008 Parallels, Inc. ISBN: N/A Parallels Holdings, Ltd. c/o Parallels Software, Inc. 13755 Sunrise Valley Drive
Red Hat Developer Toolset 1.1
Red Hat Developer Toolset 1.x 1.1 Release Notes 1 Red Hat Developer Toolset 1.1 1.1 Release Notes Release Notes for Red Hat Developer Toolset 1.1 Edition 1 Matt Newsome Red Hat, Inc [email protected]
Connection Broker Managing User Connections to Workstations, Blades, VDI, and more. Security Review
Connection Broker Managing User Connections to Workstations, Blades, VDI, and more Security Review Version 8.1 October 21, 2015 Contacting Leostream Leostream Corporation http://www.leostream.com 465 Waverley
ThinkServer RD540 and RD640 Operating System Installation Guide
ThinkServer RD540 and RD640 Operating System Installation Guide Note: Before using this information and the product it supports, be sure to read and understand the Read Me First and Safety, Warranty, and
Release Notes for Fuel and Fuel Web Version 3.0.1
Release Notes for Fuel and Fuel Web Version 3.0.1 June 21, 2013 1 Mirantis, Inc. is releasing version 3.0.1 of the Fuel Library and Fuel Web products. This is a cumulative maintenance release to the previously
How to Choose your Red Hat Enterprise Linux Filesystem
How to Choose your Red Hat Enterprise Linux Filesystem EXECUTIVE SUMMARY Choosing the Red Hat Enterprise Linux filesystem that is appropriate for your application is often a non-trivial decision due to
Drobo How-To Guide. Topics. What You Will Need. Prerequisites. Deploy Drobo B1200i with Microsoft Hyper-V Clustering
Multipathing I/O (MPIO) enables the use of multiple iscsi ports on a Drobo SAN to provide fault tolerance. MPIO can also boost performance of an application by load balancing traffic across multiple ports.
RH033 Red Hat Linux Essentials or equivalent experience with Red Hat Linux..
RH131 Red Hat Linux System Administration Course Summary For users of Linux (or UNIX) who want to start building skills in systems administration on Red Hat Linux, to a level where they can attach and
Guide to the LBaaS plugin ver. 1.0.2 for Fuel
Guide to the LBaaS plugin ver. 1.0.2 for Fuel Load Balancing plugin for Fuel LBaaS (Load Balancing as a Service) is currently an advanced service of Neutron that provides load balancing for Neutron multi
Acronis Backup & Recovery 11
Acronis Backup & Recovery 11 Update 0 Installation Guide Applies to the following editions: Advanced Server Virtual Edition Advanced Server SBS Edition Advanced Workstation Server for Linux Server for
Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide
Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide July 2010 1 Specifications are subject to change without notice. The Cloud.com logo, Cloud.com, Hypervisor Attached Storage, HAS, Hypervisor
Unbreakable Linux Network An Overview
An Oracle White Paper September 2011 Unbreakable Linux Network An Overview Introduction... 1 The Update Agent (yum)... 2 Channels Descriptions and Usage... 2 Switching from Red Hat Network (RHN) to ULN...
iscsi Quick-Connect Guide for Red Hat Linux
iscsi Quick-Connect Guide for Red Hat Linux A supplement for Network Administrators The Intel Networking Division Revision 1.0 March 2013 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH
Red Hat Enterprise Linux 6 Load Balancer Administration. Load Balancer Add-on for Red Hat Enterprise Linux Edition 6
Red Hat Enterprise Linux 6 Load Balancer Administration Load Balancer Add-on for Red Hat Enterprise Linux Edition 6 Red Hat Enterprise Linux 6 Load Balancer Administration Load Balancer Add-on for Red
Moving to Plesk Automation 11.5
Moving to Plesk Automation 11.5 Last updated: 2 June 2015 Contents About This Document 4 Introduction 5 Preparing for the Move 7 1. Install the PA Moving Tool... 8 2. Install Mail Sync Software (Windows
QuickStart Guide for Managing Computers. Version 9.2
QuickStart Guide for Managing Computers Version 9.2 JAMF Software, LLC 2013 JAMF Software, LLC. All rights reserved. JAMF Software has made all efforts to ensure that this guide is accurate. JAMF Software
Configuring Windows Server Clusters
Configuring Windows Server Clusters In Enterprise network, group of servers are often used to provide a common set of services. For example, Different physical computers can be used to answer request directed
