
ZCP trunk (build 50384) Zarafa Collaboration Platform Zarafa HA Manual

Edition 2.0

Copyright 2015 Zarafa BV. The text of and illustrations in this document are licensed by Zarafa BV under a Creative Commons Attribution Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL of the original version.

Linux is a registered trademark of Linus Torvalds in the United States and other countries. MySQL is a registered trademark of MySQL AB in the United States, the European Union and other countries. Red Hat, Red Hat Enterprise Linux, Fedora and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Ubuntu and Canonical are registered trademarks of Canonical Ltd. Debian is a registered trademark of Software in the Public Interest, Inc. SUSE and eDirectory are registered trademarks of Novell, Inc. Microsoft Windows, Microsoft Office Outlook, Microsoft Exchange and Microsoft Active Directory are registered trademarks of Microsoft Corporation in the United States and/or other countries. The trademark BlackBerry is owned by BlackBerry and is registered in the United States and may be pending or registered in other countries. Zarafa BV is not endorsed, sponsored, affiliated with or otherwise authorized by BlackBerry. All trademarks are the property of their respective owners.

Disclaimer: Although all documentation is written and compiled with care, Zarafa is not responsible for direct actions or consequences derived from using this documentation, including unclear instructions or missing information not contained in these documents.

The Zarafa Collaboration Platform (ZCP) combines the usability of Outlook with the stability and flexibility of a Linux server. It features a rich web-interface, the Zarafa WebAccess, and provides brilliant integration options with all sorts of clients, including the most popular mobile platforms. Most components of ZCP are open source, licensed under the AGPLv3 (http://www.gnu.org/licenses/agpl-3.0.html), and can therefore be downloaded freely as ZCP's Community Edition (http://www.zarafa.com/content/community). Several closed source components exist, most notably:

• the Zarafa Windows Client, providing Outlook integration,
• the Zarafa BES Integration, providing BlackBerry Enterprise Server connectivity,
• the Zarafa ADS Plugin, providing Active Directory integration, and
• the Zarafa Backup Tools.

These components, together with several advanced features for large setups and hosters, are only available in combination with a support contract as part of ZCP's Commercial Editions (http://www.zarafa.com/content/editions). Alternatively, there is a wide selection of hosted ZCP offerings available worldwide.

This Zarafa High Availability manual shows different HA setups with ZCP and describes how ZCP can be integrated in a Pacemaker/DRBD setup.

Table of Contents

1. Introduction
   1.1. High Availability example setups
2. Installing
   2.1. System Requirements
   2.2. Installation
      2.2.1. Network configuration
      2.2.2. Package installation
3. Cluster configuration
   3.1. Corosync configuration
   3.2. DRBD device initialization
   3.3. Pacemaker configuration
4. Testing configuration
   4.1. Testing a node failure
   4.2. Testing a resource failure
5. Getting more information


Chapter 1. Introduction

The Zarafa Collaboration Platform (ZCP) is an open source software suite used at many organisations as a replacement for Microsoft Exchange. Nowadays the email system is one of the most critical systems within an organisation, so making the email system highly available is becoming more and more important. The flexible architecture of ZCP offers different solutions to create a highly available mail setup. This whitepaper shows some examples of highly available Zarafa setups and contains a full description of configuring a High Availability setup with Pacemaker and DRBD.

1.1. High Availability example setups

More and more organisations virtualise their server environment to limit resource usage and gain flexibility. Most virtualization solutions, like VMware vSphere, Red Hat Enterprise Virtualization and Citrix XenServer, offer high availability as one of their standard features. The HA feature of the virtualization software can also be used for ZCP: when a hardware failure occurs, the virtualization software will automatically start the virtual machine on one of the other virtualization hosts, see figure 1.1.

Figure 1.1. Zarafa in a highly available virtualization platform

When an organisation doesn't have an HA virtualization solution, or wants to run ZCP on bare metal for the best performance, ZCP can be integrated with different open source cluster suite solutions, like Red Hat Cluster Suite or Pacemaker. Figure 1.2 shows a High Availability setup where both the MySQL database and the attachments are stored on shared storage. In case of a failure of one of the nodes, the resources will be automatically started on the second node. When no shared storage is available, the MySQL database and attachments can be stored on a DRBD device (Distributed Replicated Block Device) to have the data available on both nodes.

In case of a node failure, the DRBD device will be mounted on the second node and the Zarafa services will be started there automatically, see figure 1.3.

Figure 1.2. Zarafa in a high availability setup with shared storage

Figure 1.3. Zarafa in a high availability setup with DRBD

Note
When there is a highly available virtualization solution, Zarafa recommends using this solution for making the ZCP stack highly available.


Chapter 2. Installing

The next chapters describe the installation and configuration of a High Availability setup with Pacemaker and DRBD. Pacemaker is a cluster resource manager which is included in most Linux distributions, like RHEL6, SLES11 and Ubuntu. Pacemaker manages your cluster services by detecting and recovering from node and resource failures, using the Heartbeat or Corosync messaging layer.

2.1. System Requirements

In this whitepaper a two node cluster setup is created based on RHEL6. The following system requirements should be met before proceeding with this whitepaper:

• Two servers with a RAID1 disk array for the OS and a RAID10 disk array for data storage
• Two network interfaces per machine

2.2. Installation

Perform a minimal RHEL6 installation on both machines. The RAID10 disk array for the database and attachment storage should not be configured in the installation wizard.

2.2.1. Network configuration

In this whitepaper the two nodes get the hostnames bob and alice. The nodes are connected with the first network interface to the LAN with subnet 192.168.122.0/24. The second network interface is used for the DRBD replication.

Servername   eth0             eth1
bob          192.168.122.25   10.0.0.25
alice        192.168.122.26   10.0.0.26

Change the hostname of the nodes in /etc/sysconfig/network and configure the network interfaces in /etc/sysconfig/network-scripts/ifcfg-ethX.
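For reference, the LAN interface configuration on bob could look like the following sketch; the eth1 file is analogous, with the 10.0.0.25 address and no gateway. The GATEWAY value is an assumption for this example network, adjust it to your own environment.

# /etc/sysconfig/network-scripts/ifcfg-eth0 on bob (sketch)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.122.25
NETMASK=255.255.255.0
GATEWAY=192.168.122.1  # assumed LAN gateway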

Add the following lines to the /etc/hosts file on both nodes:

192.168.122.25 bob
192.168.122.26 alice

Restart the network services to activate the changes:

service network restart

Check if the network is successfully configured by using ifconfig:

[root@bob ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:4C:30:83
          inet addr:192.168.122.25  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe4c:3083/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:149 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12522 (12.2 KiB)  TX bytes:8736 (8.5 KiB)

eth1      Link encap:Ethernet  HWaddr 52:54:00:5F:6F:33
          inet addr:10.0.0.25  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe5f:6f33/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:27 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1242 (1.2 KiB)  TX bytes:1530 (1.4 KiB)
          Interrupt:10 Base address:0x6000

2.2.2. Package installation

After the network is successfully configured, install and configure ZCP as described in the Administrator Manual. By default the Zarafa services are started at boot time. In a clustered setup the services will be started automatically by the cluster software, so the Zarafa services should be disabled at boot time:

chkconfig mysqld off
chkconfig zarafa-server off
chkconfig zarafa-spooler off
chkconfig zarafa-dagent off
chkconfig zarafa-gateway off
chkconfig zarafa-ical off
chkconfig zarafa-licensed off
chkconfig zarafa-monitor off

Install the Pacemaker cluster software from the Red Hat yum repository:

yum install pacemaker corosync

Note
To install the Pacemaker software, please make sure you have a valid subscription for the Red Hat High Availability software channel.

The DRBD software is not part of the standard Red Hat repositories. Download the officially supported DRBD RHEL6 packages from Linbit (http://www.linbit.com/en/products-services/linbit-cluster-stack-support/support-options/support-packages-world) or use the packages from ATrpms (http://packages.atrpms.net/dist/el6/drbd). Install the drbd packages and the matching drbd kernel module. To find out which kernel is in use, run uname -a.

rpm -Uhv drbd-8.3.8.1-30.el6.x86_64.rpm drbd-kmdl-2.6.32-71.18.1.el6.x86_64-8.3.8.1-30.el6.x86_64.rpm

Enable Corosync and disable DRBD at boot time:

chkconfig drbd off
chkconfig corosync on
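As a quick sanity check of the boot-time settings before handing control to the cluster software (standard chkconfig usage, nothing setup-specific):

# Every service listed should be off in runlevels 2-5, except corosync
chkconfig --list | grep -E 'mysqld|zarafa|drbd|corosync'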

Chapter 3. Cluster configuration

3.1. Corosync configuration

The communication between the different cluster nodes is handled by the Corosync software. Execute the following steps to configure Corosync on both nodes:

cd /etc/corosync
cp corosync.conf.example corosync.conf

Change the bindnetaddr in corosync.conf to the subnet address of the interface used for cluster communication, in this setup the replication network:

bindnetaddr: 10.0.0.0

To instruct Corosync to start Pacemaker, create /etc/corosync/service.d/pcmk with the following fragment:

service {
    # Load the Pacemaker Cluster Resource Manager
    name: pacemaker
    ver: 0
}

Restart Corosync to activate the changes:

service corosync restart
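After the restart it is worth verifying that Corosync is healthy on both nodes; a minimal check could look like this (corosync-cfgtool ships with Corosync, crm_mon with Pacemaker):

# Show the status of the totem ring; the local interface should be listed
# with "no faults"
corosync-cfgtool -s
# Once Pacemaker has been started by Corosync, both nodes should show online
crm_mon -1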

3.2. DRBD device initialization

In order to have the MySQL database and the attachments available on both nodes, two DRBD devices will be created. Each DRBD device needs a partition on the RAID10 device on both nodes. Create two partitions on the RAID10 device on both nodes. In this whitepaper the RAID10 device is available as /dev/sdb.

fdisk /dev/sdb

Use the following steps to initialize the partitions:

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2031, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2031, default 2031): 500

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (501-2031, default 501):
Using default value 501
Last cylinder, +cylinders or +size{K,M,G} (501-2031, default 2031):
Using default value 2031

Command (m for help): w
The partition table has been altered!

The partitions can now be used as DRBD devices. Add the following DRBD configuration to /etc/drbd.conf on both nodes:

global {
    usage-count no;
}
common {
    protocol C;
    syncer {
        rate 50M;
    }
}

resource mysql {
    on bob {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.25:7788;
        meta-disk internal;
    }
    on alice {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.26:7788;
        meta-disk internal;
    }
}

resource zarafa {
    on bob {
        device    /dev/drbd1;
        disk      /dev/sdb2;
        address   10.0.0.25:7799;
        meta-disk internal;
    }
    on alice {
        device    /dev/drbd1;
        disk      /dev/sdb2;
        address   10.0.0.26:7799;
        meta-disk internal;
    }
}

Reload DRBD on both nodes to activate the changes:

service drbd reload

Before the DRBD devices can be used, both resources have to be initialized. Run the following commands on both nodes:

[root@bob etc]# drbdadm create-md mysql
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

drbdadm up mysql

drbdadm create-md zarafa
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

drbdadm up zarafa
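Before starting the initial synchronisation, both resources should report that the peers see each other; drbdadm can print the connection state per resource:

# Both commands should print "Connected" when the peer node is reachable
drbdadm cstate mysql
drbdadm cstate zarafa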

Check if the DRBD devices are successfully created, by using the following command:

[root@bob etc]# cat /proc/drbd
version: 8.3.8.1 (api:88/proto:86-94)
GIT-hash: 0d8589fcc32c874df57c930ca1691399b55ec893 build by gardner@, 2011-02-23 08:32:21
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:251924
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:771564

The DRBD synchronisation can be started with the following command on bob:

[root@bob ~]# drbdadm -- --overwrite-data-of-peer primary all

To check the progress of the synchronisation, use cat /proc/drbd:

[root@bob ~]# cat /proc/drbd
version: 8.3.8.1 (api:88/proto:86-94)
GIT-hash: 0d8589fcc32c874df57c930ca1691399b55ec893 build by gardner@, 2011-02-23 08:32:21
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
    ns:94336 nr:0 dw:0 dr:103160 al:0 bm:5 lo:2 pe:87 ua:256 ap:0 ep:1 wo:b oos:160340
    [======>...........] sync'ed: 37.1% (160340/251924)K
    finish: 0:00:29 speed: 5,328 (5,088) K/sec

Both DRBD devices can now be formatted with a filesystem:

[root@bob ~]# mkfs.ext4 /dev/drbd0
[root@bob ~]# mkfs.ext4 /dev/drbd1
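As an optional sanity check, the fresh filesystems can be test-mounted once on bob (the current Primary) before Pacemaker takes over the mounts:

# Mount and unmount each device once; do NOT add these to /etc/fstab,
# Pacemaker will manage the mounts from here on
mount /dev/drbd0 /mnt && umount /mnt
mount /dev/drbd1 /mnt && umount /mnt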

3.3. Pacemaker configuration

Before the actual cluster configuration can be done, the mysql and zarafa services will each be assigned a cluster ip-address. The cluster ip-addresses used in this example are:

mysql    192.168.122.101
zarafa   192.168.122.100

Add the bind-address to the [mysqld] section of /etc/my.cnf. Make sure this change is done on both nodes.

bind-address = 192.168.122.101

To let the zarafa-server access the MySQL database, the privileges have to be set:

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.122.0/255.255.255.0' IDENTIFIED BY 'secret';
mysql> FLUSH PRIVILEGES;

Change the server_bind in /etc/zarafa/server.cfg to 192.168.122.100:

server_bind = 192.168.122.100

The zarafa-server will connect to the cluster ip-address of MySQL. Make sure the mysql_host in /etc/zarafa/server.cfg is set correctly:

mysql_host = 192.168.122.101

The zarafa-dagent should also listen on the Zarafa cluster ip-address, so the Postfix MTAs on both nodes can deliver emails. Change the server_bind address in /etc/zarafa/dagent.cfg to 192.168.122.100:

server_bind = 192.168.122.100

Change the virtual_transport in /etc/postfix/main.cf to the cluster ip-address instead of localhost. The Postfix service itself will not be part of the cluster services.

virtual_transport = lmtp:192.168.122.100:2003

When zarafa-gateway and zarafa-ical are used, the server_socket of these processes should be changed as well. Change the server_socket in /etc/zarafa/gateway.cfg and /etc/zarafa/ical.cfg:

server_socket = http://192.168.122.100:236/zarafa

The Pacemaker cluster configuration can now be done. The Pacemaker cluster suite offers different tools to create the cluster configuration. Some Linux distributions, like SLES11, include a graphical administration interface, but RHEL6 does not include this interface at the moment. Another tool for configuring the cluster is the CLI tool crm. This tool will be used to configure this cluster setup and to manage both nodes and resources. More information about the crm CLI can be found on http://www.clusterlabs.org/doc/crm_cli.html.

First the cluster will be changed to disable automatic fencing and quorum support for this two node cluster:

crm configure property stonith-enabled=false
crm configure property no-quorum-policy=ignore

The resources can now be configured. Two resource groups will be defined in this cluster, one group for MySQL and one for all Zarafa services. Starting a resource group involves the following steps:

• Make the DRBD resource primary
• Mount the DRBD device
• Assign the cluster ip-address
• Start the services

Execute the following commands to add the mysql resources:

crm(live)# configure
crm(live)configure# edit

primitive drbd_mysql ocf:linbit:drbd \
        params drbd_resource="mysql" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100" \
        op monitor role=Master interval=59s timeout=30s \
        op monitor role=Slave interval=60s timeout=30s
primitive mysql_fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/var/lib/mysql" fstype="ext4" options="noatime" \
        op monitor interval="30s"
primitive mysql_ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.122.101" cidr_netmask="32" nic="eth0" \
        op monitor interval="30s"
primitive mysqld lsb:mysqld \
        op monitor interval="10" timeout="30" \
        op start interval="0" timeout="120" \
        op stop interval="0" timeout="120"
group mysql mysql_fs mysql_ip mysqld
ms ms_drbd_mysql drbd_mysql \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master
order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start

crm(live)configure# commit

The mysql resources are now configured. To check the status of the resources use:

crm(live)# status
============
Last updated: Sun Feb 27 22:42:20 2011
Stack: openais
Current DC: alice - partition with quorum
Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
2 Nodes configured, 2 expected votes
2 Resources configured.
============

Online: [ bob alice ]

 Resource Group: mysql
     mysql_fs   (ocf::heartbeat:Filesystem):    Started bob
     mysql_ip   (ocf::heartbeat:IPaddr2):       Started bob
     mysqld     (lsb:mysqld):                   Started bob
 Master/Slave Set: ms_drbd_mysql
     Masters: [ bob ]
     Slaves: [ alice ]
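With only the mysql group defined, this is a convenient moment for a manual switchover test using the standard crm resource commands (node names from this setup):

# Move the mysql group (and its DRBD master) to alice, watch crm_mon, then
# remove the migration constraint so Pacemaker may place the group freely again
crm resource migrate mysql alice
crm resource unmigrate mysql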

Now the Zarafa resource group can be added:

crm(live)# configure
crm(live)configure# edit

primitive drbd_zarafa ocf:linbit:drbd \
        params drbd_resource="zarafa" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100" \
        op monitor role=Master interval=59s timeout=30s \
        op monitor role=Slave interval=60s timeout=30s
primitive zarafa_fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd1" directory="/var/lib/zarafa" fstype="ext4" \
        op monitor interval="30s"
primitive zarafa_ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.122.100" cidr_netmask="32" nic="eth0" \
        op monitor interval="30s"
primitive zarafa-server lsb:zarafa-server \
        op monitor interval="30" timeout="60"
primitive zarafa-dagent lsb:zarafa-dagent \
        op monitor interval="30" timeout="30"
primitive zarafa-gateway lsb:zarafa-gateway \
        op monitor interval="30" timeout="30"
primitive zarafa-ical lsb:zarafa-ical \
        op monitor interval="30" timeout="30"
primitive zarafa-licensed lsb:zarafa-licensed \
        op monitor interval="30" timeout="30"
primitive zarafa-monitor lsb:zarafa-monitor \
        op monitor interval="30" timeout="30"
primitive zarafa-spooler lsb:zarafa-spooler \
        op monitor interval="30" timeout="30"
group zarafa zarafa_fs zarafa_ip zarafa-server \
        zarafa-spooler zarafa-dagent zarafa-licensed zarafa-monitor zarafa-gateway zarafa-ical
ms ms_drbd_zarafa drbd_zarafa \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation zarafa_on_drbd inf: zarafa ms_drbd_zarafa:Master
order zarafa_after_drbd inf: ms_drbd_zarafa:promote zarafa:start
order zarafa_after_mysql inf: mysql:start zarafa:start

crm(live)configure# commit

To check the status of the cluster services use:

crm(live)# status
============
Last updated: Mon Feb 28 08:31:32 2011
Stack: openais
Current DC: bob - partition WITHOUT quorum
Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
2 Nodes configured, 2 expected votes
4 Resources configured.
============

Online: [ bob ]
OFFLINE: [ alice ]

 Resource Group: mysql
     mysql_fs   (ocf::heartbeat:Filesystem):    Started bob
     mysql_ip   (ocf::heartbeat:IPaddr2):       Started bob
     mysqld     (lsb:mysqld):                   Started bob
 Master/Slave Set: ms_drbd_mysql
     Masters: [ bob ]
     Stopped: [ alice ]
 Resource Group: zarafa
     zarafa_fs        (ocf::heartbeat:Filesystem):    Started bob
     zarafa_ip        (ocf::heartbeat:IPaddr2):       Started bob
     zarafa-server    (lsb:zarafa-server):            Started bob
     zarafa-spooler   (lsb:zarafa-spooler):           Started bob
     zarafa-dagent    (lsb:zarafa-dagent):            Started bob
     zarafa-monitor   (lsb:zarafa-monitor):           Started bob
     zarafa-licensed  (lsb:zarafa-licensed):          Started bob
     zarafa-gateway   (lsb:zarafa-gateway):           Started bob
     zarafa-ical      (lsb:zarafa-ical):              Started bob
 Master/Slave Set: ms_drbd_zarafa
     Masters: [ bob ]
     Stopped: [ alice ]

The Apache webserver will be configured to run on both nodes, so a loadbalancer can be placed in front of the nodes. The Apache resource will check the status of the webserver by using the server-status page.

The server-status page should be enabled in the Apache configuration. Uncomment the following lines in /etc/httpd/conf/httpd.conf:

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

Now the Apache resource can be added to the cluster configuration:

crm(live)# configure
crm(live)configure# edit

primitive apache ocf:heartbeat:apache \
        params configfile="/etc/httpd/conf/httpd.conf" \
        op monitor interval="60s" \
        op start interval="0" timeout="40s" \
        op stop interval="0" timeout="60s"
clone apache_clone apache

crm(live)configure# commit

The Zarafa WebAccess should connect to the cluster ip-address of Zarafa, so it works on both nodes. Change the server socket in /etc/zarafa/webaccess-ajax/config.php:

define("DEFAULT_SERVER", "http://192.168.122.100:236/zarafa");
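To verify that the status page used by the Apache resource agent actually responds, run a quick local check on each node (access is limited to 127.0.0.1 by the configuration above):

# Must be executed on the node itself
curl http://127.0.0.1/server-status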

Now the cluster configuration is ready and can be used.

Chapter 4. Testing configuration

Before the cluster is used in production, it is important to test the different failover scenarios. The tool crm_mon shows the realtime status of the cluster:

============
Last updated: Mon Feb 28 18:41:16 2011
Stack: openais
Current DC: bob - partition with quorum
Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
2 Nodes configured, 2 expected votes
5 Resources configured.
============

Online: [ bob alice ]

 Resource Group: mysql
     mysql_fs   (ocf::heartbeat:Filesystem):    Started bob
     mysql_ip   (ocf::heartbeat:IPaddr2):       Started bob
     mysqld     (lsb:mysqld):                   Started bob
 Master/Slave Set: ms_drbd_mysql
     Masters: [ bob ]
     Slaves: [ alice ]
 Resource Group: zarafa
     zarafa_fs        (ocf::heartbeat:Filesystem):    Started bob
     zarafa_ip        (ocf::heartbeat:IPaddr2):       Started bob
     zarafa-server    (lsb:zarafa-server):            Started bob
     zarafa-spooler   (lsb:zarafa-spooler):           Started bob
     zarafa-dagent    (lsb:zarafa-dagent):            Started bob
     zarafa-monitor   (lsb:zarafa-monitor):           Started bob
     zarafa-licensed  (lsb:zarafa-licensed):          Started bob
     zarafa-gateway   (lsb:zarafa-gateway):           Started bob
     zarafa-ical      (lsb:zarafa-ical):              Started bob
 Master/Slave Set: ms_drbd_zarafa
     Masters: [ bob ]
     Slaves: [ alice ]
 Clone Set: apache_clone
     Started: [ bob alice ]

4.1. Testing a node failure

• Login to alice and start crm_mon
• Give bob a hard shutdown
• Check if all services are successfully started on alice

4.2. Testing a resource failure

• Login to bob and start crm_mon
• Shutdown the zarafa-server with killall -9 zarafa-server
• Check if the zarafa-server is successfully restarted

Try this test for the different services. A gentler variant of the node failure test is shown below.
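Instead of a hard shutdown, a node can also be drained and brought back with the standard crm node commands; this is a sketch using the node names of this setup:

# Gracefully move all resources off bob, observe the failover in crm_mon,
# then let bob rejoin the cluster
crm node standby bob
crm node online bob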


Chapter 5. Getting more information

The following links provide more useful information about DRBD, Pacemaker and the crm commandline tool:

• http://www.drbd.org/users-guide for all documentation about installing, configuring and troubleshooting DRBD
• http://www.clusterlabs.org/doc/crm_cli.html for a complete reference of the crm commandline interface
• http://www.clusterlabs.org for many different example setups and the architecture of Pacemaker
• http://www.zarafa.com/doc for the ZCP Administrator Manual
