Intel RAID High Availability Solution for Red Hat* Linux Systems
Best Practices White Paper

Revision 1.0
January, 2014
Revision History

Date            Revision Number   Modifications
January, 2014   1.0               Initial release.
Disclaimers

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.

Intel and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Copyright 2014 Intel Corporation. All rights reserved.
Table of Contents

1. Introduction ........................................................... 1
   1.1 Background ......................................................... 1
   1.2 Requirements and Configuration Diagram ............................. 1
2. Installation ........................................................... 3
   2.1 Download Release Package ........................................... 3
   2.2 Install Release Package ............................................ 3
   2.3 Install and Update the HA RAID Controllers ......................... 5
   2.4 Operating System Installation ...................................... 6
   2.5 Network Configuration .............................................. 7
   2.6 Installation of Intel HA RAID Drivers and Intel RAID Web Console 2 . 9
   2.7 Installation of HA Cluster Packages ............................... 11
   2.8 Firewall and SELinux Changes ...................................... 15
   2.9 Configure the HA Cluster .......................................... 16
       2.9.1 Create an HA Cluster ........................................ 16
       2.9.2 Configure the Fence Devices ................................. 17
       2.9.3 Create a Failover Domain .................................... 18
       2.9.4 Configuring Cluster Resources ............................... 19
       2.9.5 Add a Cluster Service and Assign a Resource to the Service .. 20
       2.9.6 Create a Disk Quorum ........................................ 20
       2.9.7 Propagate the Configuration File to the Cluster Nodes ....... 21
       2.9.8 Start the CMAN Service ...................................... 21
Reference Documents ...................................................... 22
List of Figures

Figure 1.  Two Server and JBOD Configuration Diagram ...................... 2
Figure 2.  Enter the BIOS Boot Manager and EFI Shell ...................... 3
Figure 3.  Install the Release Package .................................... 4
Figure 4.  Install the RAID Controller HA Key ............................. 5
Figure 5.  Open the Network Configuration File ............................ 7
Figure 6.  Edit the Network Configuration File ............................ 7
Figure 7.  Install the Packages .......................................... 12
Figure 8.  Confirm the Package Download Size ............................. 13
Figure 9.  Confirm the Package Signature Information ..................... 13
Figure 10. Change the Software Firewall .................................. 15
Figure 11. Change the SELinux Settings ................................... 15
Figure 12. Verify IPMI Settings .......................................... 17
Figure 13. Create a Disk Quorum .......................................... 20
Figure 14. Start the cman Service ........................................ 21
Figure 15. Check the Cluster Configuration ............................... 21
1. Introduction

1.1 Background

The Intel RAID High Availability (HA) Solution is designed for small-to-medium businesses, remote/branch offices, or private cloud environments that need a high-availability configuration of a server with local storage. This configuration of the Intel RAID HA solution provides two servers, two hardware RAID controllers, and shared local storage with SAS disk drives. The solution uses Linux clustering technology to provide the high-availability environment in a two-node cluster, and takes advantage of the SAS interface to allow sharing of storage. The overall design goal is to deliver a high-availability solution with a simple installation and an easy-to-use experience for the user or value-added reseller.

1.2 Requirements and Configuration Diagram

The Intel RAID HA solution requires the following:

- Two Intel servers (R2312 or equivalent)
- Two Intel hardware RAID controllers (RS25SB008) plus the enablement key for each controller
- The AXXRPFKHA2 enablement key kit, which contains one key for each controller
- External JBOD (JBOD2224S2DP)
- Supported SAS disk drives (SATA drives are not supported)
- SAS cables connecting the servers to the JBOD storage unit
- Red Hat* Enterprise Linux 6.3 or higher

This solution fits into a typical infrastructure and includes server node 1, server node 2, and the external JBOD SAS drive enclosure, as indicated in the diagram below. As with any server cluster solution, there must be a private LAN dedicated to cluster server communications in addition to the public LAN that communicates with client systems. The public network must provide Domain and Active Directory services, as well as DNS services, to the failover cluster server.
Figure 1. Two Server and JBOD Configuration Diagram
2. Installation

The installation process is detailed below. Some of these steps are performed using the server BIOS/UEFI utilities before the operating system is installed, and some are performed using the operating system tools after the operating system is installed.

Important note: Because a server cluster will be created, some of the steps require updates to be applied to one server, and then to the second server, before proceeding to the following step.

2.1 Download Release Package

Use the RAID firmware, driver, RWC2, CLI, and SMI-S packages from the IR3_2208_HA-DAS_release_package_1.0 drop or a later version. The latest version is available on the Intel Support Website at www.intel.com/support (search for "High Availability Software").

A USB flash drive is needed to store the images from the release package; it will be used to install them via the server BIOS/UEFI setup tools. The release package is a single zipped file. This file must be unzipped. Inside this package are a PDF document and five zipped files. Each of these five zipped files must be unzipped, and the resulting five folders and their contents must be copied to the USB flash drive.

2.2 Install Release Package

1. To install the package updates, enter the BIOS Boot Manager and then the EFI Shell on server node 1.

Tip: Do not plug the USB flash drive with the updates into the server until after you are in the BIOS setup.

Figure 2. Enter the BIOS Boot Manager and EFI Shell
2. Using the EFI Shell, navigate (using the cd command) to the folder named: ir3_2208-hadas_fwpkg-v23.6.0-0086

Figure 3. Install the Release Package

3. Run the shell script named: UPDATE.NSH

After the package download to the controller completes, restart the server and repeat these steps for the second server. A sketch of the full shell session follows.
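The EFI Shell session looks approximately like the following sketch. The fs0: mapping is an assumption; run map -r to list the available devices and choose the entry that corresponds to your USB flash drive.

Shell> map -r
Shell> fs0:
fs0:\> cd ir3_2208-hadas_fwpkg-v23.6.0-0086
fs0:\ir3_2208-hadas_fwpkg-v23.6.0-0086\> UPDATE.NSH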
2.3 Install and Update the HA RAID Controllers

1. To install the RAID controller HA key, shut down the server, remove the power cords, open the lid, and install the HA key onto the RAID controller.

2. Repeat this process for both servers.

Figure 4. Install the RAID Controller HA Key

3. After an HA key has been installed on the RAID controller in each server, close the lids on the servers. Remove the USB flash drive containing the drivers.

4. Install the SAS disk drives into the JBOD storage unit and then power on the JBOD storage unit.

Note: SATA disk drives are not supported with this solution. Supported SAS disk drives are dual-ported and are required for the HA RAID solution.

5. Connect the SAS cables between the servers and the external JBOD storage unit, exactly as shown in Figure 1. Verify that a private LAN exists between the two servers. This private LAN connection will be used for the cluster heartbeat and related cluster server communications. The public LAN must have a Windows Domain Controller and DNS server. The HA RAID cluster nodes should be neither a domain controller nor a DNS server.

6. After connecting the SAS cables, reconnect the power cords to the servers and power on the servers.
2.4 Operating System Installation

The operating system can be installed to drives connected to the RAID controller, or to drives connected to the server board ports. In this example, Red Hat* Enterprise Linux is used.

Example #1: Install two hard disk drives or SSDs (SSDs are an excellent choice due to the faster boot they provide) with at least 200 GB capacity, connect them to the on-board SAS or SATA ports, and configure them as a mirror using either the RSTe or ESRT2 server board RAID option. For additional information, refer to the documentation for the server system chosen for use.

Example #2: Configure a virtual drive using drives connected to the RS25SB008 RAID controller. In this case, there must be enough capacity in the remaining drives connected to the RAID controller to configure a quorum disk and volumes for shared failover. For additional information, see the RAID controller hardware guide.

1. Install Enterprise Linux from the DVD or using typical methods. In this example, the two servers are named HA01.SH and HA02.SH. The following basic configuration steps need to be completed for each server after the basic operating system installation:

a) Network Configuration
b) Installation of Intel HA RAID Drivers and Intel RAID Web Console 2
c) Installation of HA Cluster Packages
d) Firewall and SELinux Changes
e) Configure the HA Cluster

2. Using the command-line prompt, log in to the server as root and enter the root password.
2.5 Network Configuration

In this example, the server is configured with a static IP address. Each server should be configured with its own static IP address. The steps to configure the network for the first server are shown below.

1. Enter the command: cd /etc/sysconfig/network-scripts/

2. Edit the file corresponding to your newly configured network using vi or your preferred text editor.

Figure 5. Open the Network Configuration File

3. Edit the ifcfg-eth0 file with the proper IP address.

4. Set ONBOOT=yes, remove any unnecessary lines from this file, and then save it.

Figure 6. Edit the Network Configuration File
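For orientation, a minimal static configuration for ifcfg-eth0 on the first node might look like the following sketch. The IP address is the one used in this example; the netmask is an assumed value for your network.

DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.144.201
NETMASK=255.255.255.0
ONBOOT=yes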
5. Repeat the steps above to assign a different IP address to the other node in the HA cluster.

6. Edit the /etc/hosts file on each node to provide each node's IP address and hostname. For example: the IP address of HA01.SH is 192.168.144.201, and the IP address of HA02.SH is 192.168.144.202 (the resulting entries are sketched after this list).

7. The Network Manager is not supported on cluster nodes and should be removed or disabled. To disable it, run the following commands at the command prompt on both nodes:

service NetworkManager stop
chkconfig NetworkManager off

8. Reboot both nodes to make the network settings effective.
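Using the example addresses and hostnames from step 6, the /etc/hosts entries on each node would be:

192.168.144.201   HA01.SH
192.168.144.202   HA02.SH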
2.6 Installation of Intel HA RAID Drivers and Intel RAID Web Console 2

So that each node in the cluster can access the shared JBOD storage, the Intel ir3 2208 HA RAID drivers must be installed on each node of the cluster. If this driver was loaded during the Linux OS installation, steps 1 to 4 below can be skipped.

1. Check whether another version of the ir3 2208 RAID driver is installed on each node of the cluster by running the following command:

lsmod | grep mega

If megaraid_sas is listed, another version of the ir3 2208 RAID driver is installed.

2. Run the following command to remove the other version of the ir3 2208 RAID driver if it is installed:

rmmod megaraid_sas

3. Copy the ir3 2208 HA driver RPM package to any folder on the Linux OS and install it with the following command:

rpm -ivh [ir3 2208 HA driver rpm package]

4. Run the following command to load the new driver module:

modprobe megaraid_sas
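As a quick sanity check (not part of the original procedure), the version of the loaded driver can be inspected with modinfo:

modinfo megaraid_sas | grep -i version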
5. Install the Intel RAID Web Console 2 on one node in the cluster, and then create the shared RAID volumes for the cluster with this tool. For installation details, refer to the Intel RAID High Availability Storage User Guide at http://www.intel.com/support/motherboards/server/sb/cs-034613.htm.
2.7 Installation of HA Cluster Packages

To install the Linux HA cluster system, the following packages are needed:

- rgmanager: The Resource Group (Service) Manager. It controls starting, stopping, migrating, relocating, and recovering services within the HA cluster.
- lvm2-cluster: Contains support for Logical Volume Management (LVM) in a clustered environment.
- gfs2-utils: Contains a number of utilities for creating, checking, modifying, and correcting inconsistencies in Global File System 2 (GFS2) file systems.
- ccs/luci/ricci: The CCS (Cluster Configuration System) allows an administrator to create, modify, and view a cluster configuration file on a remote node through ricci, or in a local file. Luci is a web front end that provides a GUI for managing the HA cluster suite; the ricci package must be deployed and started on the respective nodes before the cluster can be managed with luci.
- cman: The Cluster Manager (cman) handles communications between nodes in the cluster, determining when a node enters or leaves the cluster.

To help resolve the complex package dependencies, the yum (Yellowdog Updater, Modified) tool is often used to install the packages above. The packages are downloaded from collections called repositories, which may be on the internet, on a local network, or on installation media. If your server has no internet access, or you cannot reach the public package repositories, local installation media can be a better choice. The following steps show how to install these packages from local media.

1. Verify that the yum tool has been installed on your Linux system.

2. Add a new file named local.repo in the folder /etc/yum.repos.d, with the following contents:

[Server]
name=Server
baseurl=file:///media/cdrom/Server
enabled=1
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=file:///media/cdrom/HighAvailability
enabled=1
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=file:///media/cdrom/LoadBalancer
enabled=1
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=file:///media/cdrom/ScalableFileSystem
enabled=1
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=file:///media/cdrom/ResilientStorage
enabled=1
gpgcheck=0

3. Create a new folder cdrom in the folder /media and mount the Linux installation media at /media/cdrom.

4. Use yum to install rgmanager, lvm2-cluster, and gfs2-utils, as sketched below.

Figure 7. Install the Packages
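Steps 3 and 4 amount to the following commands. The /dev/cdrom device name is a typical default and is an assumption here; substitute your DVD device or a mounted ISO image.

mkdir -p /media/cdrom
mount /dev/cdrom /media/cdrom
yum install rgmanager lvm2-cluster gfs2-utils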
5. Answer y when prompted to confirm the package download size and when prompted to confirm the package signature information.

Figure 8. Confirm the Package Download Size

Figure 9. Confirm the Package Signature Information

6. (Optional) When the installation of the packages is complete, update the system with:

yum update

Note: This step assumes that both nodes have been registered with Red Hat using the Red Hat Subscription Manager.
7. The ricci and cman packages are installed on each node automatically when the rgmanager package is installed. The CCS can be installed on any node to manage the whole HA cluster. Install the CCS with the command:

yum install ccs

8. Start the HA cluster services, and configure them to start on boot, with the following commands (the boot-time configuration is sketched after this step):

service rgmanager start
service ricci start
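The step above calls for configuring the services to start on boot but lists only the start commands; on Red Hat* Enterprise Linux 6 this is typically done with chkconfig (a sketch, assuming the standard init scripts):

chkconfig rgmanager on
chkconfig ricci on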
2.8 Firewall and SELinux Changes

Firewall and SELinux settings should be updated on all nodes in an HA cluster to make sure that the network communication permissions and security policies are correct. To support clustering, changes need to be made to the software firewall. These are described at the following link, including the use of the service iptables and chkconfig iptables commands:

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/s2-iptables_firewall-CA.html

For simplicity, the firewall and SELinux are turned off in this white paper (the equivalent commands are sketched at the end of this section).

Figure 10. Change the Software Firewall

Update the SELinux configuration with the command: vi /etc/selinux/config. Set SELINUX=disabled in this file and save it. Reboot the server to make the changes effective.

Figure 11. Change the SELinux Settings
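Figures 10 and 11 show these changes; in command form they amount to the following sketch. Disabling the firewall and SELinux is acceptable for a lab setup; production deployments should instead open the specific ports described at the link above.

service iptables stop
chkconfig iptables off
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
reboot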
2.9 Configure the HA Cluster

There are multiple methods to configure the HA cluster, such as using the command line, directly editing the configuration files, or using a GUI. The HA cluster is configured on a single node, and the configuration is then pushed to the remaining nodes in the cluster.

2.9.1 Create an HA Cluster

Perform the following steps to create the HA cluster with the CCS command line:

1. Specify your password for ricci with the command: passwd ricci

2. Start the following services:

service ricci start
service rgmanager start

3. Create the HA cluster with the following command:

ccs -h host --createcluster clustername

Here, host is the hostname of the node where the cluster.conf file will be created, and clustername can be defined by the user. For example:

ccs -h HA02.SH --createcluster mycluster

The system will ask for the password that you defined for ricci in step 1.

4. Add nodes to the cluster. Configure the nodes that the cluster contains by running the following command:

ccs -h host --addnode node

For example:

ccs -h HA02.SH --addnode HA02.SH
ccs -h HA02.SH --addnode HA01.SH
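To confirm that both nodes were added, the configured node list can be displayed; this assumes your version of the ccs tool provides the --lsnodes listing option:

ccs -h HA02.SH --lsnodes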
2.9.2 Configure the Fence Devices

Fencing is the process of locking resources away from a node whose status is uncertain. There are a variety of fencing techniques available:

- Power fencing: A fencing method that uses a power controller to power off an inoperable node.
- Fibre Channel switch fencing: A fencing method that disables the Fibre Channel port that connects storage to an inoperable node.
- GNBD fencing: A fencing method that disables an inoperable node's access to a GNBD server.
- IPMI fencing: A fencing method that uses the IPMI protocol and the BMC to power off an inoperable node.

Select the fencing method appropriate to your actual needs. Intel servers usually have a BMC and support the IPMI 2.0 specification, so IPMI fencing has the advantage of lower cost with no additional hardware investment. This section uses an IPMI fencing configuration as an example.

1. Set up the BMC IP address, account, and password for each node in the cluster. Refer to the Intel Server Management Guide (http://www.intel.com/support/motherboards/server/sb/cs-030462.htm) for how to set up the BMC properties. For example:

Node Name   BMC IP           BMC Account Name   BMC Account Password
HA01.SH     192.168.144.21   admin              password
HA02.SH     192.168.144.32   admin              password

2. Install IPMItool on each node with the command: yum install ipmitool

3. Verify the BMC configuration. From node HA02.SH, run an ipmitool command to check the system power status of HA01.SH (a sketch of such a command is shown below). Note that you cannot check the power status of HA02.SH from the same node.

Figure 12. Verify IPMI Settings

The BMC configuration is correct if you can retrieve the power status.
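A typical invocation for this check looks like the following sketch; the lanplus interface and the exact flags depend on your BMC configuration, so treat this as illustrative rather than exact:

ipmitool -I lanplus -H 192.168.144.21 -U admin -P password chassis power status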
4. Add a fence method for each node, providing a name for the fence method, with the following command:

ccs -h host --addmethod method node

For example, add a method fencemethod_ipmi for both nodes:

ccs -h HA02.SH --addmethod fencemethod_ipmi HA01.SH
ccs -h HA02.SH --addmethod fencemethod_ipmi HA02.SH

5. Add a fence device for the fence method with the following command:

ccs -h host --addfencedev fencedevicename agent=fence_ipmilan ipaddr=ipmi_ip login=ipmi_user passwd=ipmi_passwd auth=md5 action=off

For example, add the fence devices node1fence and node2fence for node 1 and node 2:

ccs -h HA02.SH --addfencedev node1fence agent=fence_ipmilan ipaddr=192.168.144.21 login=admin passwd=password auth=md5 action=off
ccs -h HA02.SH --addfencedev node2fence agent=fence_ipmilan ipaddr=192.168.144.32 login=admin passwd=password auth=md5 action=off

6. Add a fence instance for the fence method with the following command:

ccs -h host --addfenceinst fencedevicename node method

For example:

ccs -h HA02.SH --addfenceinst node1fence HA01.SH fencemethod_ipmi
ccs -h HA02.SH --addfenceinst node2fence HA02.SH fencemethod_ipmi

2.9.3 Create a Failover Domain

By default, any node can run any cluster service. A failover domain is a named subset of cluster nodes that are eligible to run a cluster service in the event of a node failure. A failover domain can have the following characteristics:

- Unrestricted: Allows you to specify that a subset of members is preferred, but that a cluster service assigned to this domain can run on any available member.
- Restricted: Allows you to restrict the members that can run a particular cluster service. If none of the members in a restricted failover domain are available, the cluster service cannot be started (either manually or by the cluster software).
- Unordered: When a cluster service is assigned to an unordered failover domain, the member on which the cluster service runs is chosen from the available failover domain members with no priority ordering.
- Ordered: Allows you to specify a preference order among the members of a failover domain. The member at the top of the list is the most preferred, followed by the second member in the list, and so on.
- Failback: Allows you to specify whether a service in the failover domain should fail back to the node on which it was originally running before that node failed. This is useful in circumstances where a node repeatedly fails and is part of an ordered failover domain.

To configure a failover domain, perform the following procedure:

1. Run the following command to add a failover domain:

ccs -h host --addfailoverdomain name [restricted] [ordered] [nofailback]

For example:

ccs -h HA02.SH --addfailoverdomain example_01 ordered

2. Add a node to a failover domain with the following command:

ccs -h host --addfailoverdomainnode failoverdomain node priority

For example:

ccs -h HA02.SH --addfailoverdomainnode example_01 HA01.SH 1
ccs -h HA02.SH --addfailoverdomainnode example_01 HA02.SH 2

2.9.4 Configuring Cluster Resources

Shared resources can be shared directories or properties, such as the IP address, that are tied to the cluster. You can configure two types of resources:

- Global: Resources that are available to any service in the cluster.
- Service-specific: Resources that are available to only one service.

To add a global cluster resource, execute the following command:

ccs -h host --addresource resourcetype [resource options]

For example, the following command adds a global file system resource to the cluster configuration file on HA02.SH. The name of the resource is example_fs, the file system device is /dev/sda1 (a partition on the shared storage volume), the file system mount point is /HA, and the file system type is ext3.

ccs -h HA02.SH --addresource fs name=example_fs device=/dev/sda1 mountpoint=/HA fstype=ext3

Create an IP address cluster resource:

ccs -h HA02.SH --addresource ip address=192.168.144.200 monitor_link=on sleeptime=1
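After these two commands, the resources section of /etc/cluster/cluster.conf should look approximately like the following; this is a sketch for orientation, not output copied from a live system:

<rm>
  <resources>
    <fs name="example_fs" device="/dev/sda1" mountpoint="/HA" fstype="ext3"/>
    <ip address="192.168.144.200" monitor_link="on" sleeptime="1"/>
  </resources>
</rm>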
2.9.5 Add a Cluster Service and Assign a Resource to the Service

Add a service to the cluster with the following command:

ccs -h HA02.SH --addservice web-service domain=example_01 name=webservice recovery=relocate

Here, web-service is the service name and example_01 is the cluster failover domain name. Assign resources to the service with the --addsubservice command, for example:

ccs -h HA02.SH --addsubservice web-service ip ref=192.168.144.200
ccs -h HA02.SH --addsubservice web-service fs ref=example_fs

2.9.6 Create a Disk Quorum

The disk quorum allows the cluster manager to determine which nodes in the cluster are dominant, using a shared storage disk (block device) as the medium. A shared virtual drive of at least 10 MB must be configured for the disk quorum device. Perform the following steps to create a disk quorum (qdisk):

1. Create or choose a small-capacity shared VD for the disk quorum (the example assumes a RAID 0 VD that appears as /dev/sdb, with mr_qdisk as the quorum disk label) and initialize it by running the following command:

mkqdisk -c /dev/sdb -l mr_qdisk

Figure 13. Create a Disk Quorum

2. Check whether the qdisk is visible on both nodes by running the following command:

mkqdisk -L

3. Use the following command to configure your system to use a quorum disk:

ccs -h host --setquorumd [quorumd options]

Refer to the cluster schema at /usr/share/cluster/cluster.rng for a complete list of quorum disk parameters. For example:

ccs -h HA02.SH --setquorumd label=mr_qdisk
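Beyond the label, qdiskd accepts tuning attributes such as the poll interval, failure count (tko), and vote count. The following invocation is illustrative; the values are assumptions to tune for your environment, not recommendations from this paper:

ccs -h HA02.SH --setquorumd label=mr_qdisk interval=1 tko=10 votes=1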
2.9.7 Propagate the Configuration File to the Cluster Nodes

After you have created or edited a cluster configuration file on one of the nodes in the cluster, you need to propagate that same file to all of the cluster nodes and activate the configuration. For example, propagate the cluster.conf file to all nodes in the cluster:

ccs -h HA02.SH --sync --activate

2.9.8 Start the CMAN Service

Start the cman service on all nodes in the cluster with the following command:

service cman start

Figure 14. Start the cman Service

Note: The Network Manager needs to be stopped using the service NetworkManager stop and chkconfig NetworkManager off commands. The cluster cannot be started until these two commands have been completed.

Run the command cman_tool nodes to list all nodes in the cluster. The cluster is now operational.

Figure 15. Check the Cluster Configuration

For additional information regarding Intel RAID High Availability and Intel RAID, visit www.intel.com/go/raid.
Reference Documents

R2312 Server (see Note 1)
  http://ark.intel.com/products/65399
RS25SB008
  http://www.intel.com/content/www/us/en/servers/raid/raid-controller-rs25sb008.html
Intel RAID High Availability Storage User Guide
  http://www.intel.com/support/motherboards/server/sb/cs-034613.htm
AXXRPFKHA2
  http://ark.intel.com/products/76542/Intel-RAID-Premium-Feature-Key-AXXRPFKHA2
JBOD2224S2DP
  http://www.intel.com/content/www/us/en/server-systems/storage-system-jbod2224s2dp-front-image.html
Intel Tested Hardware and O.S. List
  http://www.intel.com/support/motherboards/server/rs25ab080/sb/cs-033039.htm
Red Hat* Enterprise Linux High Availability
  http://www.redhat.com/products/enterprise-linux-add-ons/high-availability/
Red Hat* Enterprise Linux 6 Cluster Administration
  https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/pdf/Cluster_Administration/Red_Hat_Enterprise_Linux-6-Cluster_Administration-en-US.pdf
SUSE* Linux Enterprise High Availability
  https://www.suse.com/products/highavailability/
SUSE* Linux Enterprise High Availability Guide
  https://www.suse.com/documentation/sle_ha/pdfdoc/book_sleha/book_sleha.pdf

Note 1: This server system is an example of a server that is compatible with this configuration; for a complete list of servers supported by this RAID controller, see the RS25SB008 Tested Hardware and Operating System List above.