Creating Web Farms with Linux (Linux High Availability and Scalability)




Creating Web Farms with Linux (Linux High Availability and Scalability) Horms (Simon Horman) horms@verge.net.au December 2001 For Presentation in Tokyo, Japan http://verge.net.au/linux/has/ http://ultramonkey.org/

Introduction This presentation will focus on: the technologies available for High Availability and Scalability, and building a Highly Available, Load Balanced cluster using Ultra Monkey.

What is High Availability? In the context of this paper: the ability to provide some level of service while one or more components of a system have failed. The failure may be scheduled or unscheduled.

What is High Availability? The key to achieving high availability is to eliminate single points of failure. A single point of failure occurs when a resource has only one source: a web presence hosted on a single Linux box running Apache, or a site that has only one link to the internet.

What is High Availability? Eliminating single points of failure inevitably requires provisioning additional resources. It is the role of high availability solutions to architect and manage these resources. The aim is that when a failure occurs, users are still able to access the service, whether in full or degraded form.

What is Scalability? Scalability refers to the ability to grow services in a manner that is transparent to end users. Typically this involves growing services beyond a single chassis. DNS and layer 4 switching solutions are the most common methods of achieving this. Data replication across machines can be problematic.

Technologies: IP Address Takeover If a machine, or service running on a machine, becomes unavailable, it is often useful to substitute another machine. IP Address Takeover allows a hot stand-by to assume the IP address of a master host.

IP Address Takeover: ARP [Diagram: Host A sends an ARP request to Host B; the reply is saved in Host A's ARP cache.] 1. Host A sends out an ARP request for the hardware address of an IP address on Host B. 2. Host B sees the request and sends an ARP reply containing the hardware address of the interface with the IP address in question. 3. Host A then records the hardware address in its ARP cache. Entries in an ARP cache typically expire after about two minutes.

IP Address Takeover: Gratuitous ARP [Diagram: an unsolicited ARP reply is broadcast and saved directly to the receiving hosts' ARP caches.] A gratuitous ARP is an ARP reply sent when there was no ARP request. If addressed to the broadcast hardware address, all hosts on the LAN will receive the ARP reply and refresh their ARP caches.

IP Address Takeover: Gratuitous ARP If gratuitous ARPs are sent often enough: no host's ARP entry for the IP address in question should expire, no ARP requests will be sent out, and there is no opportunity for a rogue ARP reply from the failed master.
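As a sketch of what goes on the wire, the following builds a gratuitous ARP reply frame by hand (the MAC and IP values are illustrative):

```python
import socket
import struct

def gratuitous_arp(mac: bytes, ip: str) -> bytes:
    """Build a gratuitous ARP reply frame (14-byte Ethernet header + 28-byte
    ARP payload). It is addressed to the broadcast hardware address so that
    every host on the LAN refreshes its ARP cache entry for `ip`."""
    broadcast = b"\xff" * 6
    ip_bytes = socket.inet_aton(ip)
    # Ethernet header: destination, source, EtherType 0x0806 (ARP)
    eth = broadcast + mac + struct.pack("!H", 0x0806)
    arp = struct.pack(
        "!HHBBH6s4s6s4s",
        1,                    # hardware type: Ethernet
        0x0800,               # protocol type: IPv4
        6, 4,                 # hardware / protocol address lengths
        2,                    # opcode 2: reply, although no request was made
        mac, ip_bytes,        # sender hardware and protocol address
        broadcast, ip_bytes,  # target: everyone, for the same IP
    )
    return eth + arp
```

Actually transmitting the frame requires a raw packet socket (root only); in practice a tool such as send_arp, shipped with Heartbeat, does this job.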

Technologies: Layer 4 Switching The ability to load balance connections received from end-users across real servers. This can be implemented in an ethernet switch such as the Alteon Networks ACESwitch. It can also be done in a host, such as a Linux box running the Linux Virtual Server (LVS).

Layer 4 Switching: Virtual Server A virtual server is the point of contact for end-users and is typically advertised through DNS. A virtual server is defined by: an IP address, port and protocol (UDP/IP or TCP/IP), or by matches made by a packet filter.

Layer 4 Switching: Scheduling Algorithm The virtual service is assigned a scheduling algorithm. The scheduling algorithm determines which back-end server an incoming TCP/IP connection or UDP/IP datagram will be sent to. Packets for the life of a TCP/IP connection will be forwarded to the same back-end server. Optionally, subsequent TCP/IP connections or UDP/IP datagrams from a host or network may be forwarded to the same back-end server. This is useful for applications such as HTTPS, where the encryption used relies on the integrity of a handshake made between the client and a server.
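The behaviour described above can be sketched as follows; this is an illustrative model, not LVS code, and the class and method names are invented:

```python
import itertools

class RoundRobinScheduler:
    """Toy model of a layer 4 scheduler: new connections are assigned
    round-robin, and with persistence enabled, later connections from the
    same client address reuse the earlier choice (useful for HTTPS)."""

    def __init__(self, servers, persistent=False):
        self._cycle = itertools.cycle(servers)
        self._persistent = persistent
        self._affinity = {}  # client address -> chosen real server

    def schedule(self, client_addr):
        if self._persistent and client_addr in self._affinity:
            return self._affinity[client_addr]
        server = next(self._cycle)
        if self._persistent:
            self._affinity[client_addr] = server
        return server
```

LVS offers several such algorithms (round-robin and least-connections among them, each with weighted variants); persistence is a property layered on top of whichever one is chosen.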

Layer 4 Switching: Forwarding When a packet is to be forwarded to a back-end server, several mechanisms are commonly employed: Direct Routing, IP-IP Encapsulation (Tunnelling), and Network Address Translation (NAT).
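With LVS, the virtual service and the forwarding method for each real server are configured with ipvsadm; the addresses below are illustrative:

```
# Define a TCP virtual service on the VIP, scheduled round-robin
ipvsadm -A -t 192.168.0.10:80 -s rr
# Add real servers, one flag per forwarding method
ipvsadm -a -t 192.168.0.10:80 -r 10.0.0.2:80 -g   # -g: direct routing
ipvsadm -a -t 192.168.0.10:80 -r 10.0.0.3:80 -i   # -i: IP-IP encapsulation
ipvsadm -a -t 192.168.0.10:80 -r 10.0.0.4:80 -m   # -m: NAT (masquerading)
```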

Technologies: Heartbeat A heartbeat is a message sent between machines at a regular interval, of the order of seconds. If a heartbeat isn't received for a time, the machine that should have sent it is assumed to have failed. Heartbeats are used to negotiate and monitor the availability of a resource, such as a floating IP address.
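The failure-detection rule amounts to a timeout on the last heartbeat received; a minimal sketch with invented names, not the Heartbeat implementation:

```python
import time

class HeartbeatMonitor:
    """Track the last heartbeat from each peer; a peer silent for longer
    than `deadtime` seconds is presumed to have failed."""

    def __init__(self, deadtime=10.0):
        self.deadtime = deadtime
        self._last_seen = {}  # node name -> timestamp of last heartbeat

    def beat(self, node, now=None):
        """Record a heartbeat from `node` (`now` overridable for testing)."""
        self._last_seen[node] = time.monotonic() if now is None else now

    def failed_nodes(self, now=None):
        """Return the peers whose last heartbeat is older than deadtime."""
        now = time.monotonic() if now is None else now
        return [n for n, t in self._last_seen.items()
                if now - t > self.deadtime]
```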

Technologies: Heartbeat Typically, when heartbeat starts on a machine, it performs an election process with the other machines. On heartbeat networks of more than two machines it is important to take network partitioning into account. When partitioning occurs, the resource must end up owned by only one machine, not one machine in each partition.

Heartbeat: Transport It is important that the heartbeat protocol, and the transport it runs on, are as reliable as possible. Effecting a fail-over because of a false alarm may be highly undesirable. It is also important to react quickly to an actual failure. It is therefore often desirable to run heartbeat over more than one transport.
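With the Heartbeat package this is expressed in /etc/ha.d/ha.cf; the values below are illustrative, using a serial link and LAN broadcast as the two transports:

```
# /etc/ha.d/ha.cf -- illustrative values
keepalive 2          # send a heartbeat every 2 seconds
deadtime 10          # declare a peer dead after 10 seconds of silence
serial /dev/ttyS0    # heartbeat over a null-modem serial cable...
baud 19200
bcast eth0           # ...and broadcast on the LAN as a second transport
node director1
node director2
```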

Putting It Together: Ultra Monkey A suite of tools to enable a load balanced, highly available farm of servers on a LAN: LVS for load balancing; Heartbeat for high availability between the load balancers; Ldirectord to monitor the health of the real servers. Sample topologies for Highly Available and/or Load Balanced networks are provided. A single point of contact for an Open Source solution.

Sample Configuration [Diagram: the Internet connects through an External Network to a pair of Linux Directors, which connect through a Server Network to three Real Servers.]

Ultra Monkey: Linux Directors The top layer of servers handles load balancing of clients to real servers. Incoming traffic travels through the active Linux Director. The other Linux Director is a hot stand-by.

Ultra Monkey: Linux Directors Heartbeat is run between the two Linux Directors. IP address takeover is effected if the master Linux Director fails. Other resources may also be defined.

Ultra Monkey: Linux Directors Ldirectord is used to monitor the real servers: a page is requested periodically and the output parsed; servers are added to and removed from the load balancing pool as necessary.
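A minimal ldirectord configuration along these lines (addresses are illustrative; the request/receive pair names the page fetched and the string expected in the response):

```
checktimeout=5
checkinterval=10
virtual=192.168.0.10:80
        real=10.0.0.2:80 masq
        real=10.0.0.3:80 masq
        service=http
        request="index.html"
        receive="Test Page"
        scheduler=rr
```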

Sample Web Farm: Real Servers Any processing of client requests is done by these servers. Processing power can easily be increased by adding more servers. Where possible, state should be stored on clients, either using cookies or by encoding the state into the URL. Then the client doesn't need to repeatedly connect to the same server, load balancing is more flexible, and a session can continue even if a server fails.
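One way to keep session state on the client without trusting it blindly is to sign the cookie value, so any real server can verify it. A sketch, assuming a shared secret distributed to all real servers (the key and field names are invented):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"key-shared-by-all-real-servers"  # assumption: same key on every server

def encode_state(state: dict) -> str:
    """Serialise session state into a cookie value with an HMAC signature,
    so any real server can detect tampering without server-side storage."""
    payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def decode_state(cookie: str) -> dict:
    """Verify the signature and recover the state dictionary."""
    payload, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("cookie has been tampered with")
    return json.loads(base64.urlsafe_b64decode(payload))
```

Because every real server can decode the cookie, any of them can serve any request, which is what makes the flexible load balancing and fail-over above possible.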

Conclusion High Availability and Scalability are critical issues for network based services. They can be realised using a variety of well-known techniques. Ultra Monkey provides packages and documentation for deploying them under Linux.
