Shared Storage Setup with System Automation




IBM Tivoli System Automation for Multiplatforms
Authors: Markus Müller, Fabienne Schoeman, Andreas Schauberer, René Blath
Date: 2013-07-26

Setup a shared disk on both nodes

To transparently fail over an application from one cluster node to another, all relevant program and runtime data must be located on a shared disk. A running application instance transparently accesses this data underneath a common mount point. Any node that potentially hosts the application must be able to mount the file system on the shared disk.

Shared disk requirements

Proper sharing of the disk by the nodes that can run the application consuming data on the shared disk requires some precautions to prevent data corruption that might be caused by multiple nodes accessing the same disk.

Control tasks

System Automation for Multiplatforms starts, stops, and monitors the shared disk and makes its file system accessible to the application consuming the data on the shared disk. These tasks are the mount, unmount, and test operations, including the associated LVM operations in case the file systems are stored on logical volumes. Only System Automation is allowed to operate the shared disk and mount and unmount its file system. Make sure that no automount feature of the operating system mounts the file system on the disk during system boot or at runtime. The file system of the shared disk must not be checked automatically after a system reboot; for example, on Linux the /etc/fstab file must have a 0 (zero) set as the sixth field of the file system entry.

Common mount point

The same mount point must be used for the file system on the shared disk on all cluster nodes.

No multiple mounts

The file system on the shared disk may be mounted on only one node at a time. System Automation for Multiplatforms enforces this rule to ensure file system integrity. The same applies to logical volumes, volume groups, and software RAID block devices, which must not be activated (varied on) on more than one system at a time.
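For illustration, the two requirements from the Control tasks section above (no automatic mount, no automatic file system check) translate into an /etc/fstab entry like the following minimal sketch, which uses the device name and mount point introduced later in Example 2:

/dev/vg0/lvolapp /var/app ext4 rw,noauto 0 0

The noauto mount option prevents the operating system from mounting the file system at boot, and the 0 in the sixth field disables the automatic file system check.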

Data mirroring

You may consider using a data mirror between two storage systems for the shared disk to prevent data loss caused by power or hardware failures on one of the storage systems. The second disk storage system can be located in another room, building, or site.

Several techniques can provide disk mirroring functions. Hardware disk storage systems often provide raw device data mirroring to a second hardware disk storage system, usually from the same vendor. When hardware mirroring features are available, they are usually the preferred choice. Another choice is the use of software mirroring techniques like RAID1 mirroring provided by the MD device support on Linux. Comprehensive documentation and examples for this feature can be found at https://raid.wiki.kernel.org/index.php/raid_setup. When the shared file system used for the application to be made highly available is hosted on a Linux MD device, System Automation for Multiplatforms controls the MD device as well while the file system is made available to the node where the consumer application is started. You have to ensure that the MD configuration file /etc/mdadm.conf describes the MD device on each node. If only one storage system is used to hold the shared disk, ensure that some RAID redundancy technique, preferably provided by the storage system itself, is used to store the data on the storage system.

LVM setups

If the shared file system resides on an LVM logical volume, the volume group containing the logical volume is implicitly automated by System Automation for Multiplatforms as well. Make sure that the shared disk holding the volume group contains only application-specific data, so that other applications do not need to access the volume group. The associations between the shared file system, its logical volume, and its volume group container are harvested by System Automation for Multiplatforms automatically. When the shared disk is implemented as an MD device on Linux, this additional association is handled automatically, too. System Automation starts and stops the underlying devices of the shared file system automatically in order to make the file system available or unavailable.

IBM.AgFileSystem naming of the shared file system

When System Automation for Multiplatforms harvests the disk storage on each cluster node, it assigns a name to each file system found on the nodes. On Linux, these names are taken from the volume label; otherwise, long serial numbers are assigned as the IBM.AgFileSystem name. You may want to set the volume label of the shared file system to something like app-fs to make life simpler for operators. On Linux, the volume label can be set by using the -L parameter of the tune2fs command.

Example: To set and list the volume label on the logical volume vg0-lvolapp, enter the following commands:

tune2fs -L app-fs /dev/mapper/vg0-lvolapp
tune2fs -l /dev/mapper/vg0-lvolapp | grep name

If it is not possible to change the volume label or you would like to use another name, the XML file of the IBM Workload Deployer policy can be easily adjusted.
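After harvesting has run (see Refreshing the cluster after the shared disk setup below), the name that was actually assigned to the shared file system can be verified. A minimal check, assuming the label app-fs set as above:

lsrsrc IBM.AgFileSystem Name MountPoint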

Examples of shared disk or file system setups

Example 1: Shared disk setup in a VMware ESX environment

Configure VMware ESX to map shared disks to the guest operating systems. Make sure that the guest operating system is stopped before you change the configuration.

Shared disk configuration options:

Compatibility Mode: virtual - independent/persistent. Disks in persistent mode behave like conventional disks on a physical computer. All data written to a disk in persistent mode is written permanently to the disk.

SCSI Bus sharing: physical. Virtual disks can be shared between virtual machines on any server.

Steps to set up shared disks in VMware ESX:

1. Create at least one shared disk that is accessible from both systems in the SAN.

2. Attach the shared disk to the first guest by using the VMware configuration menu: Add Hardware -> Create disk -> Use mapped system LUN. Set the compatibility mode to Virtual - independent/persistent.

Figure 4. Attaching the shared disk to the first guest

3. Attach all shared disks to the second guest by using the VMware ESX configuration menu: Add Hardware -> Create disk -> Use existing disk.

Figure 5. Attaching all shared disks to the second guest

4. Set the SCSI controller that accesses the shared disk to Physical on both systems. Then the disk(s) can be shared between virtual machines on different ESX servers.

Figure 6. Setting the SCSI controller to Physical
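For orientation only, the settings selected in the GUI roughly correspond to entries like the following in each guest's .vmx configuration file; the controller number, disk number, and disk path are assumptions and depend on your actual configuration:

scsi1.sharedBus = "physical"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "<path to the mapped LUN or existing disk>"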

Example 2: Shared disk setup on a Linux MD device

Creating a data mirror using a Linux MD RAID1 device

The following figure shows two servers, for example rack, blade, or stand-alone systems, and two disk storage systems.

Figure 7. Sample setup with two servers and two disk storage systems: cluster node 1 and cluster node 2 each see the two disks /dev/sdb and /dev/sdc, provided by disk storage system 1 and disk storage system 2.

Both servers can access both disk storage systems at the same time by using cross cabling. The two raw disks /dev/sdb and /dev/sdc are used without disk partitions, for example /dev/sdb instead of /dev/sdb1.

Steps to set up the MD device /dev/md0:

On both nodes: Inspect the file /etc/mdadm.conf and make sure that it contains no entry "ARRAY /dev/md0". If such an entry already exists, check whether the entry is a leftover or whether a device /dev/md0 already exists in the system. If so, choose another device name, for example /dev/md1, for the new array and use this name in the following steps.

# node1:

Create a mirrored MD device /dev/md0:
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb /dev/sdc

Check that the MD device has been created:
mdadm --detail /dev/md0
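As an additional check that is not part of the original procedure, the progress of the initial mirror synchronization can be watched with:

cat /proc/mdstat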

Add the entry for /dev/md0 to /etc/mdadm.conf:
mdadm --detail --scan | grep /dev/md0 >> /etc/mdadm.conf

Stop the MD device before proceeding on node2:
mdadm --stop /dev/md0

# node2:

Assemble the array previously created on node1:
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc

Check that the MD device has been assembled:
mdadm --detail /dev/md0

Add the entry for /dev/md0 to /etc/mdadm.conf:
mdadm --detail --scan | grep /dev/md0 >> /etc/mdadm.conf

Stop the MD device before further proceeding on node1:
mdadm --stop /dev/md0

# node1:

Make the array available on node1 again:
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc

Creating a Logical Volume Manager (LVM2) setup

A setup with one physical volume, one volume group, and one logical volume is recommended for the high availability setup. More physical devices can be added later, and the logical volume can be extended dynamically. The physical volume, volume group, and logical volume are created on one node only. CLVM (Cluster Aware LVM) is not supported with System Automation for Multiplatforms. An LVM2 setup is recommended even without data mirroring. If no data mirroring is used, replace /dev/md0 with the /dev/sd* device used. For more information refer to Related documentation.

Steps to set up the volume group and logical volume:

# node1:

Make the MD device /dev/md0 a physical volume:
pvcreate /dev/md0

Check that the physical volume is created:
pvscan
pvdisplay

Create the volume group vg0:
vgcreate --clustered n vg0 /dev/md0

Check that the volume group is created:
vgscan
vgdisplay
vgs

Create the logical volume lvolapp. For an application using more than one file system, follow the steps below multiple times and use the --size parameter instead of --extents. In this example the complete available size of the volume group is used:
lvcreate --name lvolapp --extents 100%VG vg0

Check that the logical volume is created:
lvscan
lvdisplay

# node2:

No special steps are needed at this time. The volume group and its contents will be imported later.

Creating the file system

A file system must be created on the new logical volume. Recommended file system types are ext3, ext4, or xfs. The file system type GFS2 is not supported with System Automation for Multiplatforms. You have to format the disk on one node only. When entering the file system into /etc/fstab on Linux, the flag noauto must be specified, so that the file system is not mounted automatically. System Automation for Multiplatforms handles all required mounts. The file system entry in the file /etc/fstab must have the same content on both nodes.

Steps to set up the file system:

# node1:

Create an ext4 file system on the logical volume and set the file system volume label to app-fs:
mkfs -t ext4 -L app-fs /dev/vg0/lvolapp

Create the mount point for the file system:
mkdir /var/app

Save a backup of /etc/fstab:
cp /etc/fstab /etc/fstab_orig

Add a new entry to /etc/fstab:
echo "/dev/vg0/lvolapp /var/app ext4 rw,noauto 0 0" >> /etc/fstab

Mount the file system:
mount /var/app

Check that the file system is successfully mounted:
mount

# node2:

Create the mount point for the file system:
mkdir /var/app

Save a backup of /etc/fstab:
cp /etc/fstab /etc/fstab_orig

Add a new entry to /etc/fstab (it must have the same content as on node1):
echo "/dev/vg0/lvolapp /var/app ext4 rw,noauto 0 0" >> /etc/fstab

Refreshing the cluster after the shared disk setup

The following paragraph applies to Linux only. For System Automation for Multiplatforms to harvest the newly created disk setup and file system, run refrsrc IBM.Disk first on node1, where the application file system is mounted (!). List the file system resources with lsrsrc IBM.AgFileSystem to find out whether the harvesting process has finished and the application file system and its underlying resources have been found on node1. Then run refrsrc IBM.Disk on node2 as well. Again list the file system resources until there is a floating file system resource for the application. For that purpose use the command:

lsrsrc -s ResourceType==1 IBM.AgFileSystem
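Once the floating file system resource has been harvested, it can later be added to a System Automation resource group together with the application resources. A minimal sketch, assuming a hypothetical resource group name app-rg and the harvested file system name app-fs used in this document:

mkrg app-rg
addrgmbr -g app-rg IBM.AgFileSystem:app-fs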

Troubleshooting hints

StorageRM's default settings are in some cases not supported by SANs, so in general they should be changed. Disk resources are by default reserved (SCSI-2); only DS4k disks support this right now, so turn it off for shared disks:

chrsrc -s "Name like '%' && ResourceType=1" IBM.Disk DeviceLockMode=0

Then stop periodic harvesting:

chrsrc -c IBM.Disk HarvestInterval=0

Then make sure StorageRM harvesting has found the shared file system(s) with the following command:

lsrsrc -s "Name like '%' && ResourceType=1" IBM.AgFileSystem Name MountPoint SysMountPoint

If the file system is not there, delete the current configuration (!) to force StorageRM to repopulate its resource model with a harvest:

rmrsrc -s "Name like '%'" IBM.Disk

and trigger a reharvest on all nodes, starting with the node where the file system is currently mounted:

refrsrc IBM.Disk

Then the file system should be there and can be added to a SA MP resource group.

Important!

Make sure you have package rsct.opt.storagerm-3.1.4.3-13192.ppc.rpm or later installed on every node.
Make sure that shared file systems are reflected in /etc/fstab of every node.
Make sure that shared file systems have suitable mount options in /etc/fstab and are not marked as read-only.
After any unplugging tests, do not forget to reboot the system after recabling!
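To check which version of the StorageRM package is installed on a node (a simple verification, not spelled out in the original text), a query like the following can be used:

rpm -q rsct.opt.storagerm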