Load Balancing and High Availability Using CTDB + DNS Round Robin




Introduction

As you may already know, GlusterFS provides several methods for storage access from clients. However, only the native FUSE GlusterFS client has built-in failover and high availability features. With plain NFS, a single server handles all requests, distributing them to the different bricks as needed. That server becomes both a bottleneck and a single point of failure, and neither is desirable in a scalable storage environment. CTDB provides a way around this.

Load Balancing and High Availability using CTDB + DNS round robin

We will take advantage of two different features to provide a highly available, scalable NFS and CIFS service. First, we will use DNS round robin so that each client uses one of the Gluster servers for its mounts. Then, CTDB will provide virtual IPs and a failover mechanism to ensure that, in case of a server failure, the failover is transparent to clients.

Let's assume the following Gluster server setup with four nodes:

    gluster1, with IP 192.168.122.100
    gluster2, with IP 192.168.122.101
    gluster3, with IP 192.168.122.102
    gluster4, with IP 192.168.122.103

We will now assign four virtual IPs to the servers:

    192.168.122.200
    192.168.122.201
    192.168.122.202
    192.168.122.203

Then we define DNS entries for two load-balanced services, called glusternfs and glustercifs. These services are defined in the DNS zone with the following fragment:

    ; zone file fragment
    glusternfs 1 IN A 192.168.122.200
    glusternfs 1 IN A 192.168.122.201
    glusternfs 1 IN A 192.168.122.202
    glusternfs 1 IN A 192.168.122.203
    glustercifs 1 IN A 192.168.122.200
    glustercifs 1 IN A 192.168.122.201
    glustercifs 1 IN A 192.168.122.202

    glustercifs 1 IN A 192.168.122.203

You will notice that we are using the virtual IPs here. Combined with CTDB IP failover, this gives us both load balancing and high availability. We set a low TTL (1 second) on the records so that, if a virtual IP is down while a client is trying to mount, the client can retry using a different one. With this architecture, load is properly balanced among the NFS/CIFS servers, provided that all clients have a similar access pattern.

Now let's move on to the actual CTDB implementation.

CTDB configuration

NOTE: If you try this with GlusterFS 3.3.x, you must disable SELinux manually by editing /etc/sysconfig/selinux.

We will create a CTDB configuration for the architecture illustrated above. We start with a single volume called vol1, configured as distributed+replicated (2 replicas), and we are going to export it using both NFS and CIFS. Keep in mind that this is not a good idea for a production environment: the caching mechanisms of NFS and CIFS do not behave well with each other when there are concurrent accesses to the same file from both protocols. This is not a RHS 2.0 limitation.

First, mount the GlusterFS volume locally on each RHS server:

    # mount -t glusterfs gluster1:/vol1 /mnt/glustervol

Each server will use its own local hostname to mount. Of course, you will need to add this mount to /etc/fstab if you want it to persist across reboots.

Now we will create the Samba base configuration. Edit /etc/samba/smb.conf and add the following lines to the [global] section:

    clustering = yes
    idmap backend = tdb2
    private dir = /mnt/glustervol/lock

Then, add the CIFS export section at the end of the file:

    [glustertest]
    comment = For testing a Gluster volume exported through CIFS
    path = /mnt/glustervol
    read only = no
    guest ok = yes
    valid users = jpena

This CIFS export only allows user jpena access. Depending on your configuration, you may want to use Active Directory instead, in which case you will need to add the required information to the [global] section. In this example, we use a simpler security model.

The first three lines added to [global] instruct Samba to use CTDB for clustering. We define a private directory (/mnt/glustervol/lock) where the CTDB lock file will be stored. This must be on a shared file system, and using GlusterFS for it seems the most natural choice. From one of the GlusterFS servers, we will create this directory and edit a few files there:

/mnt/glustervol/lock/ctdb

    CTDB_RECOVERY_LOCK=/mnt/glustervol/lock/lockfile
    #CIFS only
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    CTDB_MANAGES_SAMBA=yes
    #CIFS only
    CTDB_NODES=/etc/ctdb/nodes

/mnt/glustervol/lock/nodes

This file contains the IP addresses of the RHS servers:

    192.168.122.100
    192.168.122.101
    192.168.122.102
    192.168.122.103

/etc/ctdb/public_addresses

This file is created on each node and contains the virtual IP addresses. Since it is defined locally, we could create different groups of servers, each sharing its own pool of virtual IP addresses, if needed. In this case, all nodes will have the same contents:

    192.168.122.200/24 eth0
    192.168.122.201/24 eth0

    192.168.122.202/24 eth0
    192.168.122.203/24 eth0

The first two files could also be created locally on each node, but it is easier to keep them shared. We will now create some symbolic links:

    # ln -s /mnt/glustervol/lock/ctdb /etc/sysconfig/ctdb
    # ln -s /mnt/glustervol/lock/nodes /etc/ctdb/nodes

We are now done with the configuration. Since the ctdb service starts the samba service on its own, we need to disable the samba service at startup:

    # chkconfig smb off
    # chkconfig ctdb on
    # service ctdb start

Testing and troubleshooting

Once the ctdb service is started, it will take a while until the cluster is formed and ready for client access. You can monitor the status with the following commands:

    # ctdb status

This shows the status of all cluster nodes. If a node is marked as OK, it is working fine; otherwise, you will have to wait a while longer.

    # ctdb ip

This shows which cluster node is currently hosting each virtual IP address. Beware: ifconfig will not show the virtual IPs, but ip a will.

The CTDB log file is located at /var/log/log.ctdb.

Once the CTDB service is ready, we can test the configuration. For this use case, we will first add user jpena to each of the nodes

    # adduser <username>

and to the Samba configuration on all nodes:

    # smbpasswd -a <username>

Then, it is just a matter of mounting the exported file systems using NFS and CIFS from a client (note that the CIFS mount uses the share name, glustertest, while the NFS mount uses the volume name):

    $ sudo mount -t cifs -o user=<username> //glustercifs/glustertest /mnt/cifs
    $ sudo mount glusternfs:/vol1 /mnt/nfs

It is now possible to test failover: find the node serving NFS/CIFS to your client and turn it off. In some cases it will take some time for the failover to happen, and you can follow the process with the commands mentioned above.
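To make the local GlusterFS mount persist across reboots, an /etc/fstab entry like the following can be used. This is a sketch for gluster1 (each node uses its own hostname); the _netdev option is my assumption for RHEL-family systems, where it delays the mount until networking is up.

```
# /etc/fstab fragment on gluster1 (adjust hostname per node)
gluster1:/vol1  /mnt/glustervol  glusterfs  defaults,_netdev  0 0
```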
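To close, a few small sketches that may help when reproducing this setup. First, the round-robin A records shown earlier can be generated with a short loop instead of being typed by hand. The service names and virtual IPs are the ones used in this article; the output is meant to be pasted into the zone file.

```shell
# Emit the A records (TTL 1) for both load-balanced service
# names across the four virtual IPs used above.
emit_rr_records() {
  for name in glusternfs glustercifs; do
    for ip in 192.168.122.200 192.168.122.201 192.168.122.202 192.168.122.203; do
      printf '%s 1 IN A %s\n' "$name" "$ip"
    done
  done
}

emit_rr_records
```

After reloading the zone, querying glusternfs from a client should return all four addresses.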
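Since /etc/ctdb/public_addresses must exist on every node, generating it with a script avoids typos. In this sketch, PA_FILE is a hypothetical knob that defaults to a file in the current directory so the script can be tried safely; on a real node you would set it to /etc/ctdb/public_addresses, and adjust the interface name if your nodes do not use eth0.

```shell
# Write one "virtual-IP/prefix interface" line per VIP.
# PA_FILE defaults to ./public_addresses for safe testing;
# set PA_FILE=/etc/ctdb/public_addresses on a real node.
PA_FILE=${PA_FILE:-./public_addresses}
: > "$PA_FILE"
for vip in 192.168.122.200 192.168.122.201 192.168.122.202 192.168.122.203; do
  echo "$vip/24 eth0" >> "$PA_FILE"
done
```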
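Waiting for the cluster to settle can also be scripted. The sketch below polls the status output until no node reports a problem state; the state names (UNHEALTHY, DISCONNECTED, BANNED, STOPPED) are my assumption based on common ctdb output, so verify them against your ctdb version. STATUS_CMD is a hypothetical hook so the loop can be exercised without a live cluster.

```shell
# Poll cluster status until no node is in a problem state.
# STATUS_CMD defaults to the real "ctdb status"; override it
# with a stub to exercise the loop without a cluster.
STATUS_CMD=${STATUS_CMD:-"ctdb status"}

wait_for_ctdb() {
  while $STATUS_CMD | grep -Eq 'UNHEALTHY|DISCONNECTED|BANNED|STOPPED'; do
    echo "CTDB cluster not ready yet, waiting..."
    sleep 5
  done
}
```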
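Finally, the retry behaviour that the low-TTL round-robin records give clients can also be made explicit. This sketch, with a hypothetical try_mount_nfs helper, walks the virtual IPs in order until one NFS mount succeeds; it is an illustration of the failover idea, not something the article itself requires.

```shell
# Try each virtual IP in turn until an NFS mount succeeds,
# mirroring what the 1-second-TTL round-robin records provide.
try_mount_nfs() {
  for ip in 192.168.122.200 192.168.122.201 192.168.122.202 192.168.122.203; do
    if mount -t nfs "$ip:/vol1" /mnt/nfs; then
      echo "mounted from $ip"
      return 0
    fi
  done
  echo "no virtual IP answered" >&2
  return 1
}
```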