
US 20040243709 A1

(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2004/0243709 A1
Kalyanavarathan et al.    (43) Pub. Date: Dec. 2, 2004

(54) SYSTEM AND METHOD FOR CLUSTER-SENSITIVE STICKY LOAD BALANCING

(75) Inventors: Vasanth Kalyanavarathan, Bangalore (IN); Sivasankaran R., Tamilnadu (IN)

Correspondence Address: Robert C. Kowert, Meyertons, Hood, Kivlin, Kowert & Goetzel, P.C., P.O. Box 398, Austin, TX 78767 (US)

(73) Assignee: Sun Microsystems, Inc., Santa Clara, CA

(21) Appl. No.: 10/445,493

(22) Filed: May 27, 2003

Publication Classification

(51) Int. Cl.: G06F 15/173
(52) U.S. Cl.: 709/226

(57) ABSTRACT

A system and method for cluster-sensitive sticky load balancing of server workload may include a load balancer receiving an initial request from a client. A session may be initiated in response to receiving the request. The load balancer may relay the initial request to a selected node, where the selected node may be part of a cluster of multiple nodes. Upon receiving a subsequent request pertaining to the session initiated by the initial request, the load balancer may determine if the selected node is active. If the selected node is active, the load balancer may relay the subsequent request to the selected node. If the selected node is not active, the load balancer may determine which cluster the selected node was a member of, and relay the subsequent request to another node in that same cluster.

[Front-page figure: the FIG. 2 flow diagram, reproduced on Sheet 2 of 6 below.]

Patent Application Publication    Dec. 2, 2004    Sheet 1 of 6    US 2004/0243709 A1

[FIG. 1: block diagram of distributed system 100, showing clients 160A-C, network 170, load balancer 150, interconnect 140, clusters 120A-C of nodes 110A-L, and data sources 130A-C. The drawing text itself is not recoverable from the scan.]

Patent Application Publication    Dec. 2, 2004    Sheet 2 of 6    US 2004/0243709 A1

FIG. 2 (flow diagram): Load balancer receives initial request (200). Load balancer executes selection scheme on pool of available nodes in the distributed system (202). Load balancer relays request to selected node (204). Selected node services request, returns result along with session information (206). Load balancer receives subsequent request related to the same session (208). Is selected node active? (210). If not: load balancer determines which nodes are in same cluster as selected node (212), then executes selection scheme on pool of nodes in same cluster as selected node (214).

Patent Application Publication    Dec. 2, 2004    Sheet 3 of 6    US 2004/0243709 A1

FIG. 3: Load balancer 150.
Cluster mapping table 310: Cluster 1: A, B, C, D. Cluster 2: E, F, G, H. Cluster 3: I, J, K, L.
Session mapping table 320: Session 1: D. Session 2: G. Session 3: H. Sessions 4-8: (entries illegible in the scan). Session 9: F.

Patent Application Publication    Dec. 2, 2004    Sheet 4 of 6    US 2004/0243709 A1

[FIG. 4: distributed system with a hierarchy of load balancer nodes 400A-D in front of clusters 120A-C and data sources 130A-C. The drawing text itself is not recoverable from the scan.]

Patent Application Publication    Dec. 2, 2004    Sheet 5 of 6    US 2004/0243709 A1

FIG. 5 (flow diagram): Load balancer receives initial request (500). Load balancer executes the selection scheme on pool of available nodes in the distributed system (502). Load balancer relays request to selected node (504). Selected node services request, returns result along with session information (506). Load balancer receives subsequent request related to the same session (508). Load balancer relays request to selected node (510). Load balancer receives indication that selected node is non-functional (512). Load balancer executes selection scheme on nodes in same cluster as selected node (514). Load balancer relays request to newly selected node (516).

Patent Application Publication    Dec. 2, 2004    Sheet 6 of 6    US 2004/0243709 A1

FIG. 6: Computer Subsystem 600, comprising Main Memory 620 (containing Load Balancer 150), Processor 610A, Processor 610B, I/O Interface 630, and Network Interface 640.

SYSTEM AND METHOD FOR CLUSTER-SENSITIVE STICKY LOAD BALANCING

BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention

[0002] This invention relates to the field of distributed computing systems and, more particularly, to load balancing and fail-over in clustered distributed computing systems.

[0003] 2. Description of the Related Art

[0004] As workloads on modern computer systems become larger and more varied, more and more computational and data resources may be needed. For example, a request from a client to a Web site may involve a load balancer, a Web server, a database, and an application server. Alternatively, some large-scale scientific computations may require multiple computational nodes operating in synchronization as a kind of parallel computer.

[0005] Any such collection of computational resources and/or data resources tied together by a data network may be referred to as a distributed system. Some distributed systems may be sets of identical nodes, each at a single location, connected together by a local area network. Alternatively, the nodes may be geographically scattered and connected by the Internet, or a heterogeneous mix of computers, each acting as a different resource. Each node may have a distinct operating system and be running a different set of applications.

[0006] Nodes in a distributed system may also be arranged as clusters of nodes, with each cluster working as a single system to handle requests. Alternatively, clusters of nodes in a distributed system may act semi-independently in handling a plurality of workload requests. In such an implementation, each cluster may have one or more shared data sources accessible to all nodes in the cluster.

[0007] Workload may be assigned to distributed system components via a load balancer (or a hierarchy of load balancers), which relays requests to individual nodes or clusters. For some requests it may be desirable for a client-specific session history to be maintained by the distributed system. In such an application, a client and a node in the distributed system will typically interact several times, with a response from the node necessitating a subsequent request from the client, which in turn leads to another response from the node, and so on. For example, e-commerce may require that a server be aware of what financial information the client has already provided. This history may be tracked by providing information such as a session tracking number or session identifier (ID) to the client, often in the form of a cookie. This information is returned to the distributed system along with all future transaction requests from the client that are part of the session, so that the distributed system may use the session tracking number to look up previous transaction history and manage multiple concurrent client session histories.
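To make the tracking scheme of paragraph [0007] concrete, the following Java sketch bundles a session ID and the identity of the servicing node into a single cookie value that the client echoes back with each request. The class name and the semicolon-delimited format are illustrative assumptions; the patent describes only the information carried, not a representation.

    // Illustrative sketch only: the patent describes session-tracking info
    // delivered in a cookie, but does not prescribe this type or format.
    public record SessionCookie(String sessionId, String nodeId) {

        // Encode as a single cookie value, e.g. "session1;D".
        public String toCookieValue() {
            return sessionId + ";" + nodeId;
        }

        // Rebuild the tracking info from the value the client returned.
        public static SessionCookie fromCookieValue(String value) {
            String[] parts = value.split(";", 2);
            return new SessionCookie(parts[0], parts[1]);
        }
    }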
[0008] One difficulty involved with managing session histories is that different nodes in different clusters may not have access to the same data sources, and thus, the same session histories. Alternatively, accessing data in other clusters or nodes may incur excess synchronization overhead or take much longer than accessing data local to a cluster or node. Because of this, load balancers may execute sticky load balancing, wherein a client request continuing a given session is sent to the same node that originated the session. Sticky load balancing generally involves a load balancer tracking the node currently handling a given session, often through a node identification number or node address associated with the session ID and/or bundled with the client requests.

[0009] A further difficulty with sticky load balancing may occur when the node handling a client session fails. The load balancer may send client requests for that session to another node in the system that does not have access to the client session history. This may lead to a timeout or communication error, since the new server would be unable to access the client's session history, which may in turn require a failure or restart of the session.

SUMMARY

[0010] A system and method for cluster-sensitive sticky load balancing of server workload is disclosed. The method may include a load balancer receiving an initial request from a client. A session may be initiated and the load balancer may relay the initial request to a selected node, where the selected node may be part of a cluster of multiple nodes. Each node in the cluster may share one or more common data sources, and multiple clusters of multiple nodes may embody a distributed system. Upon receiving a subsequent request relating to the session initiated by the initial request, the load balancer may determine if the selected node is active. If the selected node is active, the load balancer may relay the subsequent request to the initial node. If the selected node is not active, the load balancer may determine which nodes are part of the same cluster as the selected node, and relay the subsequent request to another node in that same cluster.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 is a block diagram of a distributed system, according to one embodiment.

[0012] FIG. 2 is a flow diagram illustrating one embodiment of a method for cluster-sensitive sticky load balancing.

[0013] FIG. 3 illustrates a cluster mapping table and session mapping table, according to one embodiment.

[0014] FIG. 4 illustrates an embodiment of a distributed system, including a hierarchical load balancer.

[0015] FIG. 5 is a flow diagram illustrating an embodiment of a method for cluster-sensitive sticky load balancing.

[0016] FIG. 6 illustrates an exemplary computer subsystem for implementing certain embodiments.

[0017] While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION OF EMBODIMENTS

[0018] Turning now to FIG. 1, a block diagram of a distributed system 100 is shown. Distributed system 100 includes multiple nodes 110A-L, arranged in clusters 120A-C. Clusters 120A-C are coupled via interconnect 140 to load balancer 150, which is in turn connected to clients 160A-C via network 170.

[0019] Each cluster 120A-C may be operable to handle a plurality of requests from clients 160A-C for a variety of functions. Such functions may include, but are not limited to, acting as a database, Web server, directory server, application server, or e-commerce server.

[0020] Load balancer 150 is operable to receive requests from clients 160A-C via network 170 and send these requests to clusters 120A-C in a balanced fashion so that no single cluster 120A-C is overwhelmed with or starved of requests. This process is known as load balancing the requests. Requests may be load balanced between clusters 120A-C by a round-robin scheme, a priority-based scheme, a load-tracking scheme, a combination thereof, or any other type of applicable scheme. Load balancer 150 is also operable to execute a method for sticky load balancing, as will be described in further detail below.

[0021] Both interconnect 140 and network 170 may be a point-to-point fabric, a local area network (LAN), a wide area network (WAN), the Internet, any other type of computing interconnect, or a combination thereof.

[0022] Each cluster 120A-C includes a data source 130A-C, which is connected to the cluster by interconnect 140. All nodes within each cluster may access the same data from the data source 130A-C associated with that particular cluster. It is noted that in one embodiment, nodes in clusters 120A-C may access data sources 130A-C on a non-local cluster. For example, node 110A may access data source 130C. However, such a remote access may take significantly longer than a local access (e.g., node 110A accessing data source 130A).

[0023] It is noted that many of the details in FIG. 1 are purely illustrative, and that other embodiments are possible. For example, load balancer 150 is pictured as a single system separate from clusters 120A-C. However, load balancer 150 may instead be a software process running on a node 110A-L in distributed system 100.

[0024] Furthermore, data sources 130A-C may include databases separate from any of the nodes in clusters 120A-C and include various types of storage, such as RAID arrays. Alternatively, data sources 130A-C may be one or more data sources located on one or more nodes 110A-L within clusters 120A-C. Likewise, the number of nodes and clusters in FIG. 1 should be considered purely illustrative. It is noted that a distributed system may contain any plurality of clusters and nodes.

[0025] FIG. 2 is a flow diagram illustrating a method for cluster-sensitive sticky load balancing. Referring collectively now to FIGS. 1-2, in 200 load balancer 150 receives an initial request from a client 160A-C. In 202 load balancer 150 executes a selection scheme on the pool of available nodes 110A-L in distributed system 100. This selection scheme may be a round-robin scheme, a priority-based scheme, a method based on current workload, any other load balancing scheme, or a combination of these methods.
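As one concrete instance of the selection schemes named in [0025], a minimal round-robin selector might look like the following Java sketch. The class and method names are assumptions for illustration; the patent deliberately leaves the scheme open.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicLong;

    // A minimal round-robin selection scheme over the pool of available nodes.
    // A priority- or load-based scheme would replace the modular counter below.
    public class RoundRobinSelector {
        private final AtomicLong counter = new AtomicLong();

        // Return the next node in rotation from the current pool.
        public String select(List<String> availableNodes) {
            if (availableNodes.isEmpty()) {
                throw new IllegalStateException("no nodes available");
            }
            int index = Math.floorMod(counter.getAndIncrement(), availableNodes.size());
            return availableNodes.get(index);
        }
    }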
[0026] Once a node 110A-L has been selected from distributed system 100, load balancer 150 relays the request to the selected node in 204. In 206 the selected node services the request and returns the result back to client 160A-C, along with some type of session information. This session information may be in the form of a cookie, for example. The session information may include information to track a session history, such as a session identification number, a node identification number and/or a node's network address, for example.

[0027] In 208 load balancer 150 receives a subsequent request from the same client 160A-C that sent the original request, pertaining to the session started in 200. Load balancer 150 may determine that the request pertains to an existing session through a session identification number sent along with the request by client 160A-C. This session identification number may then be compared to a session mapping table maintained by load balancer 150 to determine which node 110A-L is the selected node that serviced the original request, as will be discussed further below. Alternatively, client 160A-C may send an identification number or network address associated with the selected node to indicate which node handled the original request.

[0028] In 210 load balancer 150 determines if the selected node is active, and thus able to service the subsequent request. Load balancer 150 may determine the selected node's status by means of a heartbeat tracking method, wherein the load balancer periodically sends out messages to determine which nodes are active.

[0029] Alternatively, load balancer 150 may determine which nodes are active by tracking node responses to various requests. For example, if a node 110A-L has failed to respond to any requests in a given time period, load balancer 150 may mark that node 110A-L as unavailable to service future requests. In other embodiments, other techniques may be used to determine the selected node's status, such as using dummy requests, etc.

[0030] If the selected node is determined to be available in step 210, load balancer 150 returns to 204, wherein the subsequent request is forwarded to the selected node. The method may then continue through the loop of instructions between 204 and 210 multiple times until the session is ended, as load balancer 150 continually determines that the selected node is able to handle a new subsequent request for the session and forwards the new subsequent request on to the selected node.

[0031] If the selected node is determined to be unavailable in 210, load balancer 150 determines which other nodes 110A-L are part of the same cluster 120A-C as the selected node, as indicated at 212. In one embodiment load balancer 150 may examine a cluster mapping table to determine which nodes 110A-L are grouped into which clusters 120A-C, as will be discussed further below. Alternatively, other methods may be used to determine cluster membership.

[0032] In 214 load balancer 150 once more executes a selection scheme on the pool of available nodes 110A-L in distributed system 100, as previously described in step 202, this time limiting the selection scheme to only those nodes 110A-L in the same cluster 120A-C as the initially selected node. The method then returns to 204, where load balancer 150 relays the subsequent request to the newly selected node, which may service the subsequent request using the data source 130A-C to access any relevant session history.
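Putting steps 200-214 together, the routing decision of FIG. 2 might be sketched as below in Java, reusing the RoundRobinSelector sketch above. The map-based tables, the isActive() stub, and all names are assumptions for illustration; a real balancer would back isActive() with heartbeats or response tracking as in [0028]-[0029].

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Simplified sketch of FIG. 2: sticky routing with cluster-sensitive failover.
    public class ClusterStickyBalancer {
        private final Map<String, String> sessionToNode = new ConcurrentHashMap<>();
        private final Map<String, List<String>> clusterToNodes; // cluster mapping table
        private final Map<String, String> nodeToCluster;        // reverse index
        private final RoundRobinSelector selector = new RoundRobinSelector();

        public ClusterStickyBalancer(Map<String, List<String>> clusterToNodes,
                                     Map<String, String> nodeToCluster) {
            this.clusterToNodes = clusterToNodes;
            this.nodeToCluster = nodeToCluster;
        }

        // Decide which node should service this request (steps 202-214).
        public String route(String sessionId, List<String> allNodes) {
            String selected = sessionToNode.get(sessionId);
            if (selected == null) {
                // Initial request: select from the whole distributed system (202).
                selected = selector.select(allNodes);
            } else if (!isActive(selected)) {
                // Selected node is down: restrict selection to its cluster (212-214),
                // whose nodes share the failed node's data source.
                final String failed = selected;
                List<String> candidates = clusterToNodes.get(nodeToCluster.get(failed))
                        .stream()
                        .filter(n -> !n.equals(failed) && isActive(n))
                        .toList();
                selected = selector.select(candidates);
            }
            sessionToNode.put(sessionId, selected); // session mapping table update
            return selected;                        // relay the request here (204)
        }

        // Stub liveness check; see [0028]-[0029] for heartbeat/response tracking.
        protected boolean isActive(String node) {
            return true;
        }
    }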

[0033] Turning now to FIG. 3, cluster mapping table 310 and session mapping table 320 are illustrated. In the illustrated embodiment, both tables are local to load balancer 150. Cluster mapping table 310, which represents distributed system 100 as illustrated in FIG. 1, includes three entries, one per cluster 120A-C, each of which identifies the nodes 110A-L comprising that cluster. Session mapping table 320 comprises nine entries in this example, one for each active session being handled by distributed system 100. Each entry lists the specific node 110A-L handling the given session.

[0034] It is further contemplated that, in addition to listing the specific node 110A-L handling each session, each entry in session mapping table 320 may additionally contain an indication of the cluster 120A-C associated with that node. For example, entry 1 in table 320 lists node D as handling session 1. In another embodiment, entry 1 would also list cluster A as the cluster associated with the session, to better facilitate the mechanism for cluster lookup. It is also noted that the number of clusters, sessions, and entries is purely illustrative, and that both cluster mapping table 310 and session mapping table 320 may include any number of relevant entries.

[0035] Turning now to FIG. 4, an alternate embodiment of load balancer 150 from FIG. 1 is shown. Here, load balancer 150 is replaced by a hierarchy of load balancer nodes 400A-D. As before, the load balancing mechanism is coupled to clients 160A-C and clusters 120A-C through network 170 and interconnect 140, respectively.

[0036] In the illustrated embodiment, load balancer 400A receives requests from clients 160A-C via network 170, as described above in FIG. 1. However, rather than relaying requests directly to clusters 120A-C, load balancer 400A may load balance each request to one of load balancers 400B-D, each of which is responsible for further distributing requests to an associated cluster 120A-C. It is noted that the number of load balancers 400A-D and levels in the hierarchy of load balancers may differ from what is illustrated in FIG. 4.

[0037] As described in FIG. 2 above, load balancers 400A-D execute a method for cluster-sensitive sticky load balancing. An initial request from a client 160A-C may be propagated down to a selected node 110A-H for servicing. As additional requests pertaining to a session initiated by the initial request are sent through load balancers 400A-D, these additional requests are sent to the selected node 110A-H if the selected node is active. If the selected node is inactive, a lower-level load balancer 400B-D may assign the additional requests to another node in the same cluster 120A-C as the selected node.

[0038] Referring to FIGS. 1-4, it is further noted that, in the embodiment illustrated in FIG. 4, the method described in FIG. 2 may not require cluster mapping table 310 for lower-level load balancers 400B-D. Because each load balancer 400B-D relays requests only to one cluster 120A-C, any node 110A-L accessible by load balancer 400B-D will be part of the same cluster 120A-C as the initially selected node. Thus, any newly selected node 110A-L may have access to the same data source 130A-C, and may thus be able to continue the session history.
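For illustration, the two tables of FIG. 3 ([0033]-[0034]) can be held as plain in-memory maps. This Java sketch mirrors the legible entries on Sheet 3; the representation is an assumption, not something the patent prescribes.

    import java.util.List;
    import java.util.Map;

    // The FIG. 3 lookup structures as in-memory maps local to the load balancer.
    public class MappingTables {
        // Cluster mapping table 310: cluster -> member nodes.
        static final Map<String, List<String>> CLUSTER_TABLE = Map.of(
                "cluster1", List.of("A", "B", "C", "D"),
                "cluster2", List.of("E", "F", "G", "H"),
                "cluster3", List.of("I", "J", "K", "L"));

        // Session mapping table 320: session -> node handling it (legible entries).
        static final Map<String, String> SESSION_TABLE = Map.of(
                "session1", "D",
                "session2", "G",
                "session3", "H",
                "session9", "F");

        // Cluster lookup used at failover: which cluster contains this node?
        // Per [0034], storing the cluster in each session entry avoids this scan.
        static String clusterOf(String node) {
            return CLUSTER_TABLE.entrySet().stream()
                    .filter(entry -> entry.getValue().contains(node))
                    .map(Map.Entry::getKey)
                    .findFirst()
                    .orElseThrow();
        }
    }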
[0039] FIG. 5 is a flow diagram illustrating an alternate embodiment of a method for cluster-sensitive sticky load balancing. Referring collectively now to FIGS. 2 and 5, in 500 load balancer 150 receives a request from clients 160A-C via network 170. In 502, load balancer 150 executes a selection scheme on the pool of available nodes 110A-L in distributed system 100.

[0040] In 504, load balancer 150 relays the request to the selected node, which services and returns the request, along with session information, in 506. In 508, load balancer 150 receives a subsequent request related to the same session as the first request from client 160A-C. Using the session information accompanying the subsequent request, load balancer 150 relays the subsequent request to the selected node in 510.

[0041] In 512, load balancer 150 receives an indication that the selected node may be non-functional. This indication may be in the form of a signal from a monitoring process that monitors the status of all nodes 110A-L in distributed system 100. Alternatively, a user may indicate that the selected node is non-functional, or the selected node itself may determine that it is unable to service a request.

[0042] In response to receiving an indication that the selected node is non-functional, load balancer 150 executes a selection scheme in 514 on all remaining nodes 110A-L in the same cluster 120A-C as the selected node. In 516, load balancer 150 relays the request to the newly selected node. In one embodiment, load balancer 150 may store a temporary copy of the request in order to forward the request to the newly selected node in response to an indication of a non-functional node 110A-L. Alternatively, in another embodiment, load balancer 150 may signal client 160A-C to resend the request in response to an indication of a non-functional node 110A-L.

[0043] It is thus noted that load balancer 150 may take a more active role in sticky load balancing at the time of failover. It is additionally noted that, in one embodiment, load balancer 150 may modify cluster mapping table 310 and session mapping table 320 upon detection or notification that a selected node is non-functional, thereby removing any current sessions from the failed node, and preventing any other sessions from being started on the failed node until the node is repaired.

[0044] For example, load balancer 150 may receive notification that node 110K in cluster 120C is non-functional. In response, load balancer 150 may search through session mapping table 320 for all sessions associated with node 110K, and reassign those sessions to other nodes in cluster 120C. It is noted that since all nodes in cluster 120C share a common data source 130C, any node in cluster 120C should be able to continue sessions started by node 110K, thus providing a unified system image of the cluster to any clients. In addition, load balancer 150 may modify cluster mapping table 310 to indicate that no further sessions should be moved to node 110K until it is repaired.
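The reassignment described in [0043]-[0044] might be sketched as follows in Java; the structures and names are illustrative assumptions. When a node is reported down, each of its sessions is moved to a surviving node of the same cluster, which can continue the session because it shares the cluster's data source. A fuller version would also mark the failed node ineligible for new sessions until repaired.

    import java.util.List;
    import java.util.Map;

    // Sketch of the failover housekeeping from [0043]-[0044].
    public class FailoverReassigner {

        // Reassign every session on failedNode to surviving nodes of its cluster.
        // sessionToNode must be mutable (e.g. a HashMap or ConcurrentHashMap).
        static void reassign(Map<String, String> sessionToNode,
                             List<String> clusterNodes,
                             String failedNode) {
            List<String> survivors = clusterNodes.stream()
                    .filter(node -> !node.equals(failedNode))
                    .toList();
            int next = 0;
            for (Map.Entry<String, String> entry : sessionToNode.entrySet()) {
                if (entry.getValue().equals(failedNode)) {
                    // Any survivor shares the data source, so it can continue
                    // the session history started on the failed node.
                    entry.setValue(survivors.get(next++ % survivors.size()));
                }
            }
        }
    }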

[0045] Turning now to FIG. 6, an exemplary computer subsystem 600 is shown. Computer subsystem 600 includes main memory 620, which is coupled to multiple processors 610A-B and to I/O interface 630. It is noted that the number of processors is purely illustrative, and that one or more processors may be resident on the node. I/O interface 630 further connects to network interface 640. Such a system is exemplary of a load balancer, a node in a cluster, or any other kind of computing node in a distributed system.

[0046] Processors 610A-B may be representative of any of various types of processors such as an x86 processor, a PowerPC processor, or a CPU from the SPARC family of RISC processors. Likewise, main memory 620 may be representative of any of various types of memory, including DRAM, SRAM, EDO RAM, Rambus RAM, etc., or a non-volatile memory such as magnetic media, e.g., a hard drive, or optical storage. It is noted that in other embodiments, main memory 620 may include other types of suitable memory as well, or combinations of the memories mentioned above.

[0047] Processors 610A-B of computer subsystem 600 may execute software configured to execute a method of sticky load balancing for clusters in a distributed system, as described in detail above in conjunction with FIGS. 1-5. The software may be stored in memory 620 of computer subsystem 600 in the form of instructions and/or data that implement the operations described above.

[0048] For example, FIG. 6 illustrates an exemplary load balancer 150 stored in main memory 620. The instructions and/or data that comprise load balancer 150 and any components contained therein may be executed on one or more of processors 610A-B, thereby implementing the various functionalities of load balancer 150 described above.

[0049] In addition, other components not pictured, such as a display, keyboard, mouse, or trackball, for example, may be added to node 110. These additions would make node 110 exemplary of a wide variety of computer systems, such as a laptop, desktop, or workstation, any of which could be used in place of node 110.

[0050] Various embodiments may further include receiving, sending or storing instructions and/or data that implement the operations described above in conjunction with FIGS. 1-5 upon a computer readable medium. Generally speaking, a computer readable medium may include storage media or memory media such as magnetic or optical media, e.g., disk or CD-ROM, volatile or non-volatile media such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals conveyed via a communication medium such as a network and/or a wireless link.

[0051] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

What is claimed is:

1. A method, comprising:
receiving an initial request from a client, wherein the initial request initiates a session;
relaying the initial request to a selected node, wherein the selected node is part of a cluster of multiple nodes, and wherein the cluster is one of a plurality of clusters in a distributed system;
receiving a subsequent request, wherein the subsequent request pertains to the same session;
determining if the selected node is active;
if the selected node is active, relaying the subsequent request to the selected node; and
if the selected node is not active:
determining the identity of the selected node's cluster; and
relaying the subsequent request to another node in the same cluster as the selected node.
2. The method of claim 1, further comprising maintaining a session mapping table for each cluster, wherein each session mapping table indicates which nodes of its respective cluster in the distributed system are associated with which sessions.

3. The method of claim 1, wherein said determining the identity of the selected node's cluster comprises examining a cluster mapping table, wherein the cluster mapping table indicates which nodes are associated with which clusters.

4. The method of claim 1, wherein said determining if the selected node is active comprises employing a heartbeat method.

5. The method of claim 1, wherein said determining if the selected node is active comprises tracking the selected node's response to a previous request.

6. The method of claim 1, further comprising determining if a request pertains to an existing session through the use of session information provided with the request.

7. The method of claim 6, wherein the session information comprises a session identification number.

8. The method of claim 6, wherein said relaying the subsequent request to the selected node comprises using the session information to identify the selected node.

9. The method of claim 8, wherein the session information comprises the selected node's identification number or the selected node's network address.

10. The method of claim 6, wherein the session information is provided to the client in the form of a cookie in response to the initial request.

11. The method of claim 1, wherein the selected node is initially selected by a round-robin selection scheme, a priority-based selection scheme, a load-tracking selection scheme, or a combination thereof.

12. A distributed system, comprising:
a plurality of clusters, wherein each cluster comprises a plurality of nodes; and
a load balancer, operable to relay requests to individual clusters or nodes, wherein the load balancer is configured to:
receive an initial request from a client, wherein the initial request initiates a session;
relay the initial request to a selected node;
receive a subsequent request, wherein the subsequent request pertains to the same session;
determine if the selected node is active;
relay the subsequent request to the selected node if the selected node is active; and
determine the identity of the selected node's cluster and relay the subsequent request to another node in the same cluster as the selected node if the selected node is not active.

13. The distributed system of claim 12, wherein the load balancer is configured to access a session mapping table,

wherein the session mapping table indicates which nodes in the distributed system are associated with which sessions.

14. The distributed system of claim 12, wherein the load balancer is further operable to examine a cluster mapping table to determine which nodes are associated with which clusters.

15. The distributed system of claim 12, wherein the load balancer is configured to employ a heartbeat method to determine if the selected node is active.

16. The distributed system of claim 12, wherein the load balancer is configured to track the selected node's response to a previous request to determine if the selected node is active.

17. The distributed system of claim 12, wherein the load balancer is configured to determine if a request pertains to an existing session through the use of session information provided with the request.

18. The distributed system of claim 17, wherein the session information comprises a session identification number.

19. The distributed system of claim 17, wherein the load balancer is configured to relay the subsequent request to the selected node using the session information to identify the selected node.

20. The distributed system of claim 19, wherein the session information comprises the selected node's identification number or the selected node's network address.

21. The distributed system of claim 17, wherein the session information is given to the client in the form of a cookie.

22. The distributed system of claim 12, wherein the load balancer is configured to initially select the selected node by a round-robin selection scheme, a priority-based selection scheme, a load-tracking selection scheme, or a combination thereof.

23. A computer accessible medium comprising program instructions executable to implement:
receiving an initial request from a client, wherein the initial request initiates a session;
relaying the initial request to a selected node, wherein the selected node is part of a cluster of multiple nodes, and wherein the cluster is one of a plurality of clusters in a distributed system;
receiving a subsequent request, wherein the subsequent request pertains to the same session;
determining if the selected node is active;
if the selected node is active, relaying the subsequent request to the selected node; and
if the selected node is not active:
determining the identity of the selected node's cluster; and
relaying the subsequent request to another node in the same cluster as the selected node.

24. The computer accessible medium of claim 23, wherein the program instructions are further executable to implement maintaining a session mapping table for each cluster, wherein each session mapping table indicates which nodes of its respective cluster in the distributed system are associated with which sessions.

25. The computer accessible medium of claim 23, wherein said determining the identity of the selected node's cluster comprises examining a cluster mapping table, wherein the cluster mapping table indicates which nodes are associated with which clusters.

26. The computer accessible medium of claim 23, wherein said determining if the selected node is active comprises employing a heartbeat method.

27. The computer accessible medium of claim 23, wherein said determining if the selected node is active comprises tracking the selected node's response to a previous request.
28. The computer accessible medium of claim 23, wherein the program instructions are further executable to implement determining if a request pertains to an existing session through the use of session information provided with the request.

29. The computer accessible medium of claim 28, wherein the session information comprises a session identification number.

30. The computer accessible medium of claim 28, wherein said relaying the subsequent request to the selected node comprises using the session information to identify the selected node.

31. The computer accessible medium of claim 30, wherein the session information comprises the selected node's identification number or the selected node's network address.

32. The computer accessible medium of claim 28, wherein the session information is provided to the client in the form of a cookie in response to the initial request.

33. The computer accessible medium of claim 23, wherein the selected node is initially selected by a round-robin selection scheme, a priority-based selection scheme, a load-tracking selection scheme, or a combination thereof.