High Availability Server Clustering Solutions




White Paper
High Availability Server Clustering Solutions
Extending the benefits of RAID technology into the server arena
Intel in Communications

Contents
Executive Summary
Extending Protection to Storage Servers
Server Mirroring
Server Clustering
Clustering Technologies
How Clusters Work
Current Connection Types
Deployment Methods
SAN Applications
Conclusion
For more information

Executive Summary

In today's information technology world, high availability is more and more important in critical systems such as networked storage. Demands on these systems include not only ensuring the availability of important data, but also efficient resource sharing of the relatively expensive components. By using high availability server clustering in storage environments, organizations can intelligently combine servers and RAID arrays to simultaneously increase the security and the computing power of the whole system. The server cluster appears to the user as one single system with extraordinary characteristics. This concept can also be applied within Storage Area Networks (SANs). As a strategy to avoid single points of failure in such systems, using host-based RAID controllers for the nodes of the cluster provides redundancy and security appropriate for mass storage. Intel intelligent RAID controllers with high availability clustering support are designed for high-end and enterprise servers, helping to make the benefits of data availability and server reliability accessible to businesses of all sizes.

Extending Protection to Storage Servers

A server cluster, simply put, extends RAID technology into the server arena by creating a system in which one server is redundant to another. This increases the safety and computing power of the entire system. RAID technology has evolved from the need to simultaneously improve performance, expand capacity, and safeguard information from potential catastrophes such as a power outage or hardware failure. With RAID, several independent hard disks are combined to form one large logical array, or one logical hard disk. Along with the data, redundancy information is also stored on the array. This may take the form of mirrored data, as in RAID level 1, or parity information, as in RAID level 5. The key is that the operating system no longer deals with separate drives, but with the array as one logical drive. While RAID is not a substitute for backup, since it cannot protect data against theft or fire, it does increase data safety and availability.

That protection can be taken a step further. Redundancy can be applied not only to the hard drives, but also to the servers that are attached to the RAID arrays. The two most common means of accomplishing this are mirroring and clustering.
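The two redundancy schemes just mentioned can be made concrete with a short sketch. The following Python fragment is purely illustrative (it is not part of the paper and ignores striping layout and parity rotation): it shows that RAID level 1 simply duplicates each block, while RAID level 5 stores an XOR parity block from which the contents of any single failed disk can be rebuilt.

from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid1_write(block: bytes):
    """RAID level 1: the same block is written to both members of the mirror."""
    return block, block

def raid5_parity(stripe):
    """RAID level 5: the parity block is the XOR of all data blocks in a stripe."""
    return reduce(xor_blocks, stripe)

# A stripe across three data disks plus one parity block.
stripe = [b"AAAA", b"BBBB", b"CCCC"]
parity = raid5_parity(stripe)

# If one disk fails, its block is rebuilt by XOR-ing the parity with the survivors.
rebuilt = reduce(xor_blocks, [stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]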
Server Mirroring

Server mirroring simply involves two duplicate servers attached to the same network. One server performs the work while the other receives duplicate (mirrored) data. When one server goes down, the other is ready to take over in a matter of minutes. A major benefit is that the servers can be in separate rooms or buildings for disaster protection. They can even be in different geographic locations if a sufficiently powerful network connection is provided. There are also few hardware restrictions on cabling, controllers, or drives.

However, there are significant drawbacks to server mirroring. First, costs can be prohibitive. Organizations purchasing a second server naturally want to put it into production so that it can begin paying for itself, but mirroring requires the second server to be kept in waiting mode. Meanwhile, server maintenance and upgrade costs are effectively doubled. Performance degradation is another concern. Besides handling all of the file transfer work for network users, the primary server may have to process additional I/Os as it passes information along to the mirror server. This can add substantial processor overhead if system usage is heavy. If there is a partial failure, such as the loss of a disk on the primary server, and the mirror begins taking up some of the slack, performance will be slower because disk I/O will have to take place over the network. During a full system failure, data may be lost if the primary server cannot complete the transfer of in-process data to the mirror before going down.

Server Clustering

Instead of requiring one server to wait on another, server clustering allows each server to act on its own while using one common, redundant mass storage device. In a basic server cluster, two servers share one RAID array. If one server goes down, the second takes over while still maintaining its own workload. The failover time is greatly reduced because each server is already attached to the information that users are seeking. Also, there is less chance that data will be lost during a system failure: data does not have to be sent to a backup system at the time of failure, because it is already accessible to both servers. Just as important, the cost factor is reduced, since it is not necessary to have a separate RAID array in each server, and the backup server is not merely waiting for something to do. It is functioning as an active production server on the network, sharing the processing load.

Clustering Technologies

Most major operating systems, including those from Microsoft*, Novell*, and Linux providers such as Red Hat* and TurboLinux*, now provide support for server clustering. While details vary, cluster solutions for the most part work in the same basic way.

How Clusters Work

The servers in a cluster communicate with one another through polling, and are configured so that the operating system no longer deals with separate servers, but with one logical system (Figure 1). Each server in the cluster continually polls every other server in the cluster to ensure that it is still operational. This polling takes place over a private heartbeat network, which requires its own separate cable run. Each clustered server is connected to:

The network, via a LAN connection
A common mass storage device, via a shared data bus
The other server(s) in the cluster, through a simple interconnect device such as a secondary network card

When a failover takes place, all three connections become involved. If the failed server is no longer available over the private heartbeat connection, it is polled on the LAN. If this yields no response, the polling server must take over the failed server's assignment of connecting users to the common data source via the data bus.

With interconnected servers operating in a cluster, performance can be improved by adjusting the load on each. With some operating systems, however, dynamic load balancing is not possible at this time. A speed advantage is then only possible through static load balancing, where two different applications run in parallel on different servers in the cluster; for example, an SQL database on one and a web server on the other.

Figure 1. A basic two-node cluster: clients reach Node 1 and Node 2 over the user LAN, both nodes attach to a shared disk enclosure over the shared storage bus, and the nodes monitor each other over a private (heartbeat) network.
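To illustrate the polling and failover sequence described above, here is a minimal, hypothetical Python sketch. It is not taken from the paper: the port numbers, addresses, and the take_over_services placeholder are assumptions made only for the example.

import socket

HEARTBEAT_PORT = 7001   # hypothetical port used on the private heartbeat network
LAN_PORT = 7001         # hypothetical port used for the fallback check on the LAN
TIMEOUT_S = 2.0

def is_alive(address: str, port: int) -> bool:
    """Return True if the peer accepts a TCP connection within the timeout."""
    try:
        with socket.create_connection((address, port), timeout=TIMEOUT_S):
            return True
    except OSError:
        return False

def take_over_services() -> None:
    # Placeholder: mount the shared array's volumes and start the peer's services.
    print("Failover: taking over the peer's connections to the shared storage")

def poll_peer(heartbeat_addr: str, lan_addr: str) -> None:
    """Poll the peer on the private heartbeat network first, then on the LAN.
    Only if both checks fail does this node take over the peer's workload."""
    if is_alive(heartbeat_addr, HEARTBEAT_PORT):
        return                      # peer is healthy
    if is_alive(lan_addr, LAN_PORT):
        return                      # heartbeat cable problem, but the peer is still up
    take_over_services()            # peer is down: serve its users from shared storage

# Example call with hypothetical addresses: poll_peer("10.0.0.2", "192.168.1.2")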

Current Connection Types

SCSI

For servers located on the same rack in the same room, SCSI (Small Computer System Interface) technology is likely sufficient. A parallel data transfer technology, SCSI is economical but has distance and bandwidth limitations. Even though Ultra320 has pushed data transfer rates to 320 MB/s, SCSI still limits cable length and the number of devices. Differential signaling allows cable lengths of up to 25 meters, but it is very expensive, not widely used, and incompatible with other SCSI formats.

Fibre Channel

For higher data transfer rates and longer cable lengths, the choice is Fibre Channel Arbitrated Loop (FC-AL) technology for host-based RAID controllers. The longer cable run allows an organization to increase the number of attached devices per cable and to locate them in separate rooms or facilities. As with SCSI, all devices on the cable share the same bandwidth.

Fibre Channel is a serial, high-speed data transfer technology. It is not limited to the transmission of signals through optical fibers; it can also use less expensive copper wiring, either twisted pair or coaxial. FC is an open industry standard and can be viewed as a transport vehicle for the supported command set (usually SCSI commands). Unaware of content, Fibre Channel simply packs data into frames, transports the frames to the appropriate devices, and provides error checking. FC-AL makes it possible to attach up to 126 devices on a single channel and allows distances of up to 25 meters using copper cabling. With multi-mode optical fiber, a distance of up to 500 meters can be achieved between devices (not just in overall cable length). Using single-mode optical fiber, the distance between devices can be up to 10,000 meters (10 km).

Deployment Methods

Although not the only solution, choosing host-based RAID controllers for the nodes of a cluster helps to eliminate the single point of failure that would be presented by locating RAID control on the disk array. The host-based controller typically resides on an adapter card that offloads RAID-related tasks, including host interrupts, freeing the server CPU to handle the application load.

There are several different ways of setting up a cluster. For example, Figure 2 depicts a setup using two cluster servers, each with a three-channel SCSI RAID controller configured in a RAID array. By placing each disk or set of disks on a separate controller channel, there is also redundancy at the SCSI connections, so the failure of a cable does not lead to a crash of the array.

Figure 2. Two nodes, each with a three-channel SCSI RAID controller attached to the array: redundancy also at the SCSI connections, so the failure of a cable does not lead to a crash of the array.

Using Fibre Channel technology, two dual-channel FC-AL controllers can be used along with redundant Fibre Channel hubs in a dual-loop (redundant cable) configuration (Figure 3). In this situation, the hubs allow the organization to disconnect one server from the cluster and still maintain a certain level of redundancy in the event the enclosure is not configured to automatically close an open loop. For maximum security, the same configuration can use redundant enclosures (Figure 4).

Figure 3. Dual-channel Fibre Channel controllers with redundant hubs in a dual-loop configuration (redundant cabling); the hubs allow one node to be disconnected from the cluster.

Figure 4. Dual-channel Fibre Channel controllers with redundant hubs and redundant enclosures, for maximum security.
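The benefit of the dual-loop (redundant cable) configuration is that I/O can continue over the second loop when one path fails. The sketch below is illustrative only; the loop names, the PathFailed exception, and the issue_io placeholder are assumptions for the example, not part of the paper. It simply shows the idea of trying the primary loop and falling back to the redundant one.

class PathFailed(Exception):
    """Raised when an I/O cannot be completed over a given loop."""

def issue_io(loop: str, block: int, data: bytes) -> None:
    # Placeholder for a real driver call that writes a block over one FC-AL loop.
    print(f"wrote block {block} via {loop}")

def write_block(block: int, data: bytes, loops=("loop-A", "loop-B")) -> str:
    """Try each redundant loop in turn; succeed if any path to the enclosure works."""
    for loop in loops:
        try:
            issue_io(loop, block, data)
            return loop
        except PathFailed:
            continue        # cable or hub failure on this loop: try the other one
    raise PathFailed("no path to the shared enclosure is available")

# Example: a single cable or hub failure is transparent to the application.
write_block(42, b"payload")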

SAN Applications

With increasing requirements for continuous streams of information and zero downtime, many businesses can benefit from a Storage Area Network (SAN). However, the cost of implementing a SAN can be prohibitive. The combination of RAID and high availability server clustering can make entry into the SAN storage arena much more feasible.

Implementing FC-AL solutions in the SAN environment is relatively easy and can be very powerful. Fibre Channel is preferable for SANs because it provides outstanding data transfer rates and supports many devices and very long cable lengths. Host-based RAID controllers offer an affordable path into the entry-level SAN field, providing interoperability and scalability.

To set up a very basic SAN island, an intelligent FC RAID controller is inserted into a server. The server can then be connected to an FC hub that has a link to a disk enclosure, and the whole setup is attached to a LAN via the server. The hub gives this SAN island an easy way to scale into a larger SAN environment when resources permit. For example, adding more storage is as simple as adding another FC disk enclosure to the existing hub. Additional hubs (or a split hub) can carry this one step further by adding cable redundancy and a failover plan. The dual-loop configuration of the controller can also add bandwidth. Multiple SAN islands can be configured together into a larger SAN environment (Figure 5).

Figure 5. Multiple SAN islands, each a cluster of servers attached to shared storage, configured together over the LAN into a larger SAN environment.
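As a purely illustrative model of the SAN-island building blocks described above (the class and attribute names are invented for this sketch, not taken from the paper), the following Python fragment shows how an island scales by attaching enclosures to its hub and how several islands combine into a larger SAN:

from dataclasses import dataclass, field

@dataclass
class SanIsland:
    """One SAN island: a server with an FC RAID controller, an FC hub,
    and the disk enclosures linked to that hub."""
    server: str
    hub: str
    enclosures: list = field(default_factory=list)

    def add_enclosure(self, name: str) -> None:
        # Scaling the island: attach another FC disk enclosure to the existing hub.
        self.enclosures.append(name)

@dataclass
class San:
    """A larger SAN environment built from multiple islands (Figure 5)."""
    islands: list = field(default_factory=list)

# Example: start with one island, grow it, then combine islands into one SAN.
island_a = SanIsland(server="cluster-a-node1", hub="hub-1", enclosures=["encl-1"])
island_a.add_enclosure("encl-2")
island_b = SanIsland(server="cluster-b-node1", hub="hub-2", enclosures=["encl-3"])
san = San(islands=[island_a, island_b])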

Conclusion

Increasing network content, database applications, rich email, and e-commerce are driving rapid growth in the volume of data moving across public and enterprise IP networks. In turn, this is driving demand for affordable, reliable, and accessible storage available anywhere on the network. To meet these needs, any network that runs a mission-critical database should consider the combination of RAID and server clustering. This includes online banking sites, auctions, stores, or any other type of e-commerce site that depends on uninterrupted access. The Internet has become a requirement of practically all commerce today, and any downtime can have a severe impact on the entire organization. This is the driving force behind the adoption of RAID and server clusters as components in more and more enterprise networks.

Intel supplies a wide range of interoperable building blocks for storage at any level of integration, including cluster-capable FC RAID controllers. Intel's open, standards-based approach fosters a community of providers who are accelerating the availability of safe, reliable, high-performance storage solutions that result in lower total costs of storage development and ownership.

For more information

For more information about Intel RAID controllers and RAID-on-Motherboard solutions, visit: http://developer.intel.com/design/servers/buildingblocks/
For more information, visit the Intel web site at: http://developer.intel.com/design/storage

Intel may make changes to specifications and product descriptions at any time, without notice. *Other names and brands may be claimed as the property of others. Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Information in this document is provided in connection with Intel products. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted by this document. Except as provided in Intel's Terms and Conditions of Sale for such products, Intel assumes no liability whatsoever, and Intel disclaims any express or implied warranty relating to the sale and/or use of Intel products, including liability or warranties relating to fitness for a particular purpose, merchantability, or infringement of any patent, copyright, or other intellectual property right. Intel products are not intended for use in medical, life-saving, or life-sustaining applications.

Copyright 2002 Intel Corporation. All rights reserved.
0902/SI/VM/MTL/IW/1K C Please Recycle 251574-001