Configuring Windows Server Clusters




In an enterprise network, a group of servers is often used to provide a common set of services. For example, several physical computers can answer requests directed at a single web site or database server. These server groups are often referred to as clusters. In this article we will discuss the load-balancing and high-availability server clusters we can configure in Windows Server 2008. We can configure three types of server groups in Windows Server 2008:

Round-robin distribution group
Network Load Balancing (NLB) cluster
Failover cluster

First, a round-robin distribution group is a set of computers that uses DNS to provide basic load balancing with minimal configuration requirements. It is a very simple method for distributing a workload among multiple servers: the DNS server is configured with more than one record resolving the same name to different IP addresses, and it rotates through those records when answering queries, spreading client requests across the servers. Its main advantage is that it is very easy to configure. Round-robin is enabled by default on most DNS servers, so you only need to create the appropriate records. The biggest drawback is that if one of the servers goes down, the DNS server does not react to the failure and keeps directing client requests to the inactive server until an administrator removes its record. Another disadvantage is that every record is given equal weight, regardless of whether one target server is more powerful than another. Because of these serious limitations, this method is not recommended for large production networks; we will see next how Network Load Balancing overcomes them.
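For example, the round-robin records for a small web farm can be created with the dnscmd utility on the DNS server. This is a minimal sketch run against the lab's DNS server DC01; the host name www and the use of the lab node addresses here are assumptions for illustration:

    # Two A records with the same name but different addresses; the DNS
    # server will rotate between them on successive queries.
    dnscmd DC01 /RecordAdd abhi.local www A 192.168.1.15
    dnscmd DC01 /RecordAdd abhi.local www A 192.168.1.16

    # Verify that both records were created:
    dnscmd DC01 /EnumRecords abhi.local www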

NETWORK LOAD BALANCING
Network Load Balancing (NLB) is an installable feature of Windows Server 2008 that distributes client requests among the servers in an NLB cluster by using a virtual IP address and a shared name. From the client's perspective, the NLB cluster appears to be a single server. In a common scenario, NLB is used to create a web farm, a group of computers supporting one web site or a set of web sites. It can also be used to create a terminal server farm, a VPN server farm, or an ISA Server firewall cluster. It is not suitable for clusters where data changes frequently, such as a SQL database cluster or a file server cluster; for that type of server group, Microsoft has a solution known as failover clustering, which we will cover later in this article.

NLB provides some advantages over the round-robin DNS method. First, NLB automatically detects servers that have been disconnected from the NLB cluster and redistributes client requests among the remaining live hosts, which prevents clients from sending requests to a failed server. Second, NLB gives you the option to specify the load percentage that each host will handle.

CONFIGURING AN NLB CLUSTER
In this section we will learn how to configure an NLB cluster. The lab used for this demonstration consists of the following servers and roles:

Domain: abhi.local
Domain controller: DC01.abhi.local with the IP address 192.168.1.1
Member servers: Node1.abhi.local with the IP address 192.168.1.15 and Node2.abhi.local with the IP address 192.168.1.16

Creating an NLB cluster is a very simple process. To begin, configure on both nodes, Node1.abhi.local and Node2.abhi.local, the service or application (such as IIS) that the cluster will provide to clients. Make sure to create identical configurations, because you want the client experience to be identical regardless of which server a user is connected to. For this lab I installed IIS and configured a default web site on both nodes. The next step is to install the Network Load Balancing feature on both nodes, since both are going to join the NLB cluster. To do so, perform the following steps on each node: open Server Manager, click Add Features, select Network Load Balancing, click Next, and follow the prompts to install.
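The same feature can also be installed from the command line. On Windows Server 2008 the ServerManagerCmd utility handles this, and on Server 2008 R2 and later the ServerManager PowerShell module replaces it; a sketch, with the feature name worth verifying on your build:

    # Windows Server 2008: install the NLB feature on the local node
    servermanagercmd -install NLB

    # Windows Server 2008 R2 and later, from an elevated PowerShell:
    Import-Module ServerManager
    Add-WindowsFeature NLB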

Once the feature is installed on both nodes, the next step is to use Network Load Balancing Manager to configure the cluster. To do so, perform the following steps: launch Network Load Balancing Manager from Administrative Tools, or open it by typing Nlbmgr.exe at a command prompt. In the Network Load Balancing Manager console tree, right-click Network Load Balancing Clusters and then click New Cluster. Connect to a host that will be part of the new cluster; in this lab the nodes are Node1.abhi.local and Node2.abhi.local. I will add Node1 first and configure its properties, and then show how to add the second node to the cluster. As you can see in the figure above, I entered Node1 and clicked Connect, and then selected the interface to use with the cluster, in this case the one with the node IP address 192.168.1.15. The other interface has an IP address on a subnet different from the local area network because it will be used for cluster communication, which we will see later during the failover configuration.

On the Host Parameters page, select a value for Priority. The host with the lowest numerical priority among the current members handles all cluster network traffic not covered by a port rule. For this lab I gave Node1 priority 1. On the Cluster IP Addresses page, click Add to enter the cluster IP address, which is shared by every host in the cluster. Note that NLB does not support DHCP: NLB disables DHCP on each interface it configures, so the IP address must be static. Also note that the address entered here is not the IP address of any of the servers or nodes; it represents the cluster itself. In this lab I used the address 192.168.1.20. Click Next.

On the Cluster Parameters page, in the Cluster IP Configuration area, verify the IP address and subnet mask, and then type a fully qualified domain name for the cluster. As you can see in the figure above, I verified the IP address details and entered the cluster FQDN nlbcluster.abhi.local. The wizard generates a unique cluster MAC address, which clients use when sending requests to the group of servers. An interesting detail: if you look at the generated MAC address, its last four octets are the hexadecimal representation of the cluster IP address. Note that the FQDN is not needed when using NLB with Terminal Servers. The page also offers three options for the cluster operation mode. In unicast mode, the MAC address of the cluster is assigned to the network adapter of the computer, and the built-in MAC address of the network adapter is not used. It is recommended that you accept the unicast default setting. Click Next to continue. We can now see Node1's status in the NLB console: it is in the Converged state, meaning Node1 is ready to provide cluster services and accept client requests directed at the cluster.
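On Windows Server 2008 R2 and later, the same cluster can be created with the NetworkLoadBalancingClusters PowerShell module instead of the wizard. A minimal sketch using the lab values; the interface name is an assumption, and the parameter names are worth double-checking with Get-Help New-NlbCluster:

    # Run on Node1 from an elevated PowerShell. Creates the cluster on the
    # chosen interface with the shared (virtual) IP and unicast mode.
    Import-Module NetworkLoadBalancingClusters
    New-NlbCluster -InterfaceName "Local Area Connection" `
                   -ClusterName "nlbcluster.abhi.local" `
                   -ClusterPrimaryIP 192.168.1.20 `
                   -SubnetMask 255.255.255.0 `
                   -OperationMode Unicast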

Node1 has now been successfully added to the cluster, so let's add our second node, Node2.abhi.local. To add more hosts to the cluster, right-click the new cluster nlbcluster.abhi.local and then click Add Host To Cluster. Configure the host parameters (including host priority and dedicated IP address) for Node2 following the same instructions that we used for Node1.abhi.local; a command-line equivalent is sketched after this paragraph. Because we are adding hosts to an already configured cluster, all the cluster-wide parameters remain the same. Once this is done, verify the status of Node2 in the NLB Manager console, as shown below. Our NLB cluster is now configured with two nodes, Node1 with the highest host priority of 1 and Node2 with host priority 2.
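On Server 2008 R2 and later, the second host can also be joined from PowerShell; again a hedged sketch, assuming the same interface name on Node2:

    # Run on Node1, where the cluster already exists.
    Import-Module NetworkLoadBalancingClusters
    Get-NlbCluster | Add-NlbClusterNode -NewNodeName Node2 -NewNodeInterface "Local Area Connection"

    # List the hosts and their convergence state:
    Get-NlbClusterNode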

Next we will look at the port rules of the NLB cluster. Right-click the newly configured cluster nlbcluster.abhi.local, click Cluster Properties, and go to the Port Rules tab to view the port rule settings. Currently the default port rule accepts requests on all ports for the nodes in the cluster. Since in this lab we installed IIS on both nodes to serve web site requests, we only need to handle HTTP traffic on port 80, so we will edit the rule to apply only to HTTP. If you are load balancing another service, for example Terminal Services, you would instead create a rule for ports 3389 to 3389 that applies only to RDP traffic. To configure the port rule for web services: on the Port Rules page, click Edit to modify the default port rule and enter the range 80 to 80. In the Protocols area, select TCP as the specific protocol the rule should cover. For Filtering Mode, select Multiple Host if you want multiple hosts in the cluster to handle the network traffic for the rule, or Single Host if you want a single host to handle it; in this lab both nodes are configured to answer web site requests, so I selected Multiple Host. For Affinity, which applies only to the Multiple Host filtering mode, I selected None, since we want multiple connections from the same client IP address to be handled by different cluster hosts. Once done, click OK to finish the port rule configuration, as shown in the figure below.
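The same port rule can be scripted on Server 2008 R2 and later. A hedged sketch; the cmdlet and parameter names here are from memory and worth verifying with Get-Help Add-NlbClusterPortRule before relying on them:

    # Replace the default all-ports rule with an HTTP-only rule.
    Import-Module NetworkLoadBalancingClusters
    Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force

    # TCP port 80, handled by all hosts, with no client affinity:
    Add-NlbClusterPortRule -StartPort 80 -EndPort 80 -Protocol TCP -Mode Multiple -Affinity None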

CREATING A FAILOVER CLUSTER
Creating a failover cluster is a multistep process. The first step is to configure the physical hardware for the cluster. Then you install the Failover Clustering feature and run the cluster validation tool, which verifies that the hardware and software prerequisites for clustering are met. One of the first things to do before creating the cluster is to add storage, because two or more nodes will access resources on the same shared storage. For a two-node failover cluster, the storage should contain at least two separate volumes (LUNs) configured at the hardware level. The first volume functions as the witness disk, a volume that holds a copy of the cluster configuration database. Witness disks, known as quorum disks in Windows Server 2003, are used in many but not all cluster configurations. The second volume will contain the files that are shared to users.

In this lab we will create a two-node failover cluster and validate the clustering feature and its properties. For storage I am going to use StarWind's iSCSI SAN software; you can download it from the StarWind web site and turn a Windows machine into a SAN. Before creating a failover cluster, you have to install the Failover Clustering feature on both nodes; I have already installed it on both, and I am going to reuse the same nodes and configuration from the NLB cluster section. With the feature installed, the next step is to attach the storage to the nodes. As mentioned earlier, the cluster storage comes from StarWind's iSCSI SAN software, and Windows Server 2008 ships with a built-in iSCSI Initiator to connect to it. To launch it, open All Programs, go to Administrative Tools, and click iSCSI Initiator. Enter the target device's IP address and click Quick Connect, then select the target you created and click Connect. For this lab demonstration I created two targets, one for the witness disk and one for the cluster data, as shown in the figure below. The status initially shows as Inactive.

Once you click Connect, the discovered target's status changes to Active, the initiator authenticates against your SAN device, and a new disk device is added to your server, visible in the Disk Management console.
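The same connection can be made with the built-in iscsicli command-line utility. A minimal sketch; the portal address and target IQN below are placeholders for whatever your SAN actually presents:

    # Register the SAN as a target portal (address is an assumption for this lab):
    iscsicli QAddTargetPortal 192.168.1.50

    # Discover the targets the portal exposes:
    iscsicli ListTargets

    # Log in to a discovered target by its IQN (this example IQN is hypothetical):
    iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:san1-witness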

Once all this is done, initialize the new disks and name them according to your environment's needs. Note that you should not use dynamic disks with failover clustering; use basic disks. I brought both cluster shared disks online and named them as shown below for this lab demonstration. These two disks will be shared among the nodes in our cluster. It is also recommended that the storage partitions be formatted with NTFS. Note as well that the network you use for iSCSI cannot be used for regular network communication, and that on all clustered servers the network adapters you use to connect to the iSCSI storage target should be identical. Gigabit Ethernet or faster is recommended.
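Bringing a disk online, initializing it, and formatting it with NTFS can also be scripted with diskpart. A sketch assuming the witness LUN appeared as disk 1; check the output of list disk first, because the disk numbers on your system will likely differ:

    rem witness.txt -- diskpart script for the witness LUN
    select disk 1
    online disk noerr
    attributes disk clear readonly
    create partition primary
    format fs=ntfs quick label="Witness"
    assign letter=Q

Run it with diskpart /s witness.txt, then repeat for the second LUN with a different disk number, label (for example "Cluster Data"), and drive letter.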

Our storage is ready, and the Failover Clustering feature is already installed on both nodes, so it is time to create and configure the failover cluster. The first thing to do is validate the clustering configuration. To run the Validate A Configuration Wizard, open Failover Cluster Management from Administrative Tools, and then click Validate A Configuration in the management area or the Actions pane, as shown in the figure below:

Add the names of the servers that you want to be part of the cluster and click Next. In this case we add our two nodes to the cluster group.

Select Run All Tests on the following page and click Next, then confirm your settings and click Next again; the wizard will start validating the configuration. Once the configuration is successfully validated, we are ready to create the failover cluster. Make sure that when you move to a production environment, this validation report comes back with green checks. For more detail, click View Report.
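On Windows Server 2008 R2 and later, the same validation can be run from PowerShell with the FailoverClusters module (on Server 2008 itself the wizard is the tool). A minimal sketch:

    # Run the full validation suite against both lab nodes; an HTML report is produced.
    Import-Module FailoverClusters
    Test-Cluster -Node Node1.abhi.local, Node2.abhi.local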

Running the Create Cluster Wizard
The next step in creating a cluster is to run the Create Cluster Wizard. It installs the software foundation for the cluster, converts the attached storage into cluster disks, and creates a computer account in Active Directory for the cluster. To create the cluster, launch Failover Cluster Management and click Create A Cluster. In the Create Cluster Wizard, simply enter the names of the cluster nodes when prompted; the wizard then lets you name the cluster and assign it an IP address, after which the cluster is created. On the Access Point for Administering the Cluster page, enter the name of the cluster and assign an IP address. This creates a computer account in Active Directory and a host (A) record on the DNS server.

Click Next on the confirmation page.

The wizard now starts creating the cluster, validating the cluster state on both nodes as it goes.
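On Server 2008 R2 and later this step can be scripted as well. A hedged sketch; the cluster name and management address below are assumptions for illustration, since the article's wizard screenshots do not show the values used:

    # Create the cluster from both nodes with a static management address.
    Import-Module FailoverClusters
    New-Cluster -Name LabCluster -Node Node1.abhi.local, Node2.abhi.local `
                -StaticAddress 192.168.1.21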

Now that the Create Cluster Wizard has completed, we need to configure the services or applications for which the cluster will provide failover. To do this, we run the High Availability Wizard, which configures failover for a particular service or application. To launch it, in Failover Cluster Management click Configure A Service Or Application, as shown below. On the Select Service Or Application page, select the service or application for which you want to provide failover (high availability), and then click Next. For this demonstration I chose File Server, to build a file server cluster in my lab network. (Make sure the File Services role is enabled on both nodes in order to create a file server failover cluster.)

On the Client Access Point page, enter the name of the clustered service and an IP address. Note that this IP address is not supplied automatically by your DHCP settings; it must be a static, unique IPv4 address for the clustered file server. In this lab I named my file server cluster LabClusterFS and gave it the IP address 192.168.1.22. Upon completion of the wizard you will see a new computer account named LabClusterFS in Active Directory and a matching host record on the DNS server.

On the Select Storage page, select the available cluster disk you want to assign to this service or application. As you may remember, I created two cluster LUNs for this lab demonstration, so two storage options appear on this page; I chose the disk named Cluster Data. On the Confirmation page click Next, and the wizard starts creating the file server cluster based on the given configuration. After the wizard runs and the Summary page appears, click View Report to see the tasks the wizard performed, and click Finish to close the wizard.
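The same highly available file server can be created from PowerShell on Server 2008 R2 and later; a minimal sketch using the lab values, where the storage name must match your cluster disk's name exactly:

    # Create the clustered file server with its client access point and storage.
    Import-Module FailoverClusters
    Add-ClusterFileServerRole -Name LabClusterFS -StaticAddress 192.168.1.22 `
                              -Storage "Cluster Data"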

We have now successfully created our file server failover cluster in the lab network. The service appears under Services and Applications in the Failover Cluster Management tool, and the current owner of the cluster group is Node1.abhi.local. Before creating a shared resource on our file server cluster, let's explore some more features and properties of the Failover Cluster Management tool. Select the newly created service LabClusterFS and you can see its summary in the results pane, as shown in the figure below: the status of the cluster, the IP address it is configured with, the storage volume our clustered application is assigned to, its preferred owner, and the available shared folder resources. Currently only the default administrative shared folder is available for our file server; once we configure more shared resources, they will appear in this window.

Under Nodes in the console tree we can see all the nodes available to our cluster; to add more, simply click Add Node in the Actions pane. Under Storage we can see all the available cluster disks and their usage: which disks are in use and the status of each cluster disk volume.

Under Networks we can view and check the status of the network adapters configured for our cluster, and we can also check the cluster quorum network settings, as shown in the figure below.
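The same information is exposed through the FailoverClusters PowerShell module on Server 2008 R2 and later; a quick sketch:

    # Inspect the cluster from the command line.
    Import-Module FailoverClusters
    Get-ClusterNode          # member nodes and their state
    Get-ClusterResource      # resources (IP, network name, disks) and their owners
    Get-ClusterNetwork       # cluster networks and their roles
    Get-ClusterQuorum        # current quorum model and witness disk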

Now that we have explored the features and options of the failover cluster, it is time to create a shared resource on our newly created file server cluster LabClusterFS. To do so, perform the following steps: right-click the service and application LabClusterFS and click Add A Shared Folder. On the Shared Folder Location page, enter the path of your shared location and click Provision Storage, as shown below:

If you want to change the NTFS permissions for this shared resource, click Yes; otherwise leave the default settings and click Next. Leave the defaults on the Share Protocols page and click Next; this is also the page where you could choose NFS instead of SMB.

Review how the shared folder will be presented to clients over SMB, then click Next. On the SMB Permissions page, change the permissions for the shared folder if you need to; otherwise leave the defaults and click Next.

If you have a DFS namespace in your network, you can publish the share to it here; otherwise click Next. Review your configuration settings and, if everything suits your network, click Create to start the process.

You will get a confirmation with a green check showing that the shared folder was successfully created for your file server cluster; click Close. The newly created shared folder is now available in the Services and Applications console window.

VALIDATING THE FAILOVER CLUSTER
We have created our shared resource on the file server cluster, and it is available to clients over the SMB protocol. It is time to check how our failover cluster behaves if a disaster happens. To demonstrate this, I am going to move the shared resource to Node1.abhi.local to check that failover is functional. Once that is verified, I will move the resource back to Node2.abhi.local, shut that server down, and then check and verify the cluster service status. To do so, perform the following steps: right-click the clustered service, click Move This Service Or Application To Another Node, and move the cluster service from Node2.abhi.local to Node1.abhi.local.

You can observe the status changes in the center pane of the snap-in as the clustered service instance moves. If the service moves successfully, failover is functional, so at this stage we have verified that our failover cluster is working properly and the resource now runs on Node1.abhi.local. Let's check one more time, this time by shutting down a node and watching the status of the cluster service. I moved the resource back to Node2.abhi.local, so it is now available on Node2, and then shut down Node2.abhi.local. Observe the status changes in the Failover Cluster Management tool: the status shows as Pending while the cluster automatically moves the resource to the other available node.
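A planned move like the one above can also be issued from the command line. On Server 2008 R2 and later the FailoverClusters module covers it; a minimal sketch, assuming the cluster group carries the client access point's name:

    # Move the clustered file server to Node1 and confirm its owner.
    Import-Module FailoverClusters
    Move-ClusterGroup -Name LabClusterFS -Node Node1.abhi.local
    Get-ClusterGroup -Name LabClusterFS   # shows the current owner node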

Within moments you will see that the resource has been successfully moved to the other available node, in this lab Node1.abhi.local. We have now checked and verified that our cluster is functional and ready to serve in case of a disaster: when one server fails, another server takes over for it. In this article we have walked through demonstrations of the Network Load Balancing and Failover Clustering features of Windows Server 2008.