MuleSoft Blueprint: Load Balancing Mule for Scalability and Availability




Introduction

Integration applications almost always have requirements dictating high availability and scalability. In this Blueprint we'll learn how Mule applications can be implemented to meet such requirements. We'll start off by looking at trivial load balancing approaches, using TCP load balancers and JMS. We'll then dig into Mule's HA Clustering, which facilitates high availability and load sharing for Mule applications. In the course of this discussion we'll also see how Mule's support for VM queueing leads to additional opportunities to introduce reliability into your integration applications.

Developing Applications for Scalability and High Availability

Certain considerations must be taken into account when developing an application that needs to be deployed for high availability. The primary concern is determining what state, if any, must be shared so that it is available to all nodes. The classic example of shared state in a web application is the HTTP session. Typically a session cookie is used on the client side to associate a user with particular data stored on the server. For a single node this session data can be stored in memory, but in a load balanced environment the data must be stored in a manner that enables each node to access it.

Similar examples exist in integration applications. For example, consider a Mule application that must poll a shared NFS directory using the File transport. This application functions properly as long as it's running on a single Mule server. Adding more Mule servers, however, produces a race condition as each server attempts to process the same file simultaneously.
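To make the race condition concrete, here is a minimal sketch of such a polling flow, assuming Mule 3 XML configuration (namespace declarations omitted) and a hypothetical /mnt/shared/orders NFS mount. Deployed unchanged to two or more standalone servers, each server polls the same directory and can pick up the same file:

    <flow name="pollSharedDirectory">
        <!-- Each standalone server polls the same NFS mount every 5
             seconds; two servers can claim the same file at once. -->
        <file:inbound-endpoint path="/mnt/shared/orders"
                               pollingFrequency="5000"/>
        <logger message="Processing #[message.inboundProperties['originalFilename']]"
                level="INFO"/>
    </flow>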

A similar condition may exist with the idempotent-message-filter. The idempotent-message-filter keeps a record of messages accepted by the filter and rejects those that have already been passed. By default Mule stores the state of the idempotent filter on the filesystem local to the Mule server. This state, the list of messages processed by the filter, must be shared with the other Mule nodes as they are added to the environment.

BEST PRACTICE: Strive to make your flows stateless whenever possible.

The most effective way to avoid problems of shared state is to write your Mule flows so that they are stateless. Flows that do require shared state should be kept to a minimum. In addition to avoiding the concurrency issues described in the previous section, this approach will also optimize the performance of your applications, since stateless flows tend to perform better than stateful flows.

Horizontally Scaling Stateless Mule Applications with Load Balancing

Load balancing provides the easiest mechanism to achieve high availability and scaling with Mule applications. It distributes traffic across multiple Mule nodes running the same Mule application. Most load balancing mechanisms offer a variety of algorithms to choose from. Stateless algorithms, like round-robin, simply alternate traffic between nodes. More complex session-based algorithms use a token, like an HTTP session cookie, to pin a client to a particular node or group of nodes.

Load Balancing TCP Based Transports

TCP based transports, such as HTTP, are usually load balanced with either a hardware or software load balancer. Hardware load balancers, such as F5's BIG-IP, or software load balancers, such as HAProxy, are typically used to distribute traffic across a farm of Mule nodes.
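On the Mule side nothing special is required, since every node runs an identical copy of the application. The following is a minimal sketch of a stateless HTTP flow that could sit behind such a load balancer (Mule 3 XML; the port, path, and payload are hypothetical):

    <flow name="orderStatus">
        <!-- Identical on every node; the load balancer simply targets
             port 8081 on each server in the farm. -->
        <http:inbound-endpoint host="0.0.0.0" port="8081" path="status"
                               exchange-pattern="request-response"/>
        <set-payload value="OK"/>
    </flow>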

Using TCP based load balancing is usually trivial with Mule. The load balancer is typically placed in front of the Mule nodes.

BEST PRACTICE: Use load balancing without HA clustering when you know your Mule flows are stateless.

Load Balancing with JMS

Many JMS brokers, such as ActiveMQ and HornetQ, support competing consumption of JMS queues. This allows multiple clients to consume off the same JMS queue while the JMS broker distributes messages equally among them. This behavior enables a load balancing approach similar to the TCP-based one described in the previous section.

Additional flexibility can be achieved by using JMS QoS headers, such as JMSPriority, in conjunction with competing consumption. This facilitates the weighting of messages to control which are given priority for processing, a feature not typically possible with TCP based protocols.

NOTE: In order to take advantage of QoS headers, ensure that the honorQosHeaders attribute is set to true in your Mule configuration.

BEST PRACTICE: Check your JMS broker's documentation to confirm the behavior of multiple consumers on a single queue.

BEST PRACTICE: Use JMS QoS headers in conjunction with competing consumption to achieve QoS in a load balanced environment.
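As a rough illustration, assuming ActiveMQ and a hypothetical orders queue, a connector honoring QoS headers and a competing-consumer flow might look like this (Mule 3 XML, namespace declarations omitted):

    <jms:activemq-connector name="jmsConnector"
                            brokerURL="tcp://broker:61616"
                            honorQosHeaders="true"/>

    <flow name="competingConsumer">
        <!-- Deploy this flow on several Mule nodes; the broker distributes
             messages across the competing consumers on the same queue. -->
        <jms:inbound-endpoint queue="orders" connector-ref="jmsConnector"/>
        <logger message="Consumed #[message.inboundProperties['JMSMessageID']]"
                level="INFO"/>
    </flow>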

Using Mule HA Clusters to Distribute State

Horizontal load balancing is a powerful technique for making your Mule applications web scale, at the cost of not maintaining state between the nodes. There are, however, times when sharing state is unavoidable. Mule HA Clustering can be used when these situations arise. The use of any of the following features in your Mule application likely indicates you have a requirement for shared state:

- The File transport
- The SFTP/FTP transports
- JMS topics
- Quartz scheduling
- Message sequencing
- The request-reply exchange pattern on asynchronous transports (i.e., JMS or VM)
- The idempotent-message-filter
- The caching scope
- Any sort of polling of a shared resource
- The use of redelivery policies

All Mule features, including message processors and aggregation, become cluster aware when deployed in a cluster. This is automatic and doesn't require intervention from the developer or changes during application development. Mule clusters function just like a single server. When two or more Mule servers are joined in a cluster, they create a common memory space that serves as the foundation for distributed primitives like queues, collections and semaphores. Through the use of these primitives, the nodes in a cluster can create shared message stores to synchronize state, shared queues to balance load, and semaphores to synchronize access to common resources. All communication and synchronization happens automatically and in the background, so any Mule application can become a clustered application just by being deployed in a cluster of Mule servers.

NOTE: Using a Mule cluster in conjunction with TCP based transports like HTTP still requires a front-end load balancer to distribute traffic across the cluster.

Decomposing Flows for Reliability with the VM Transport

A reliability pattern is a design that results in reliable messaging for an application even if the application receives messages from a non-transactional transport. A reliability pattern couples a reliable acquisition flow with an application logic flow, as shown in the following diagram.

The reliable acquisition flow (that is, the left-hand part of the diagram) delivers a message reliably from an inbound endpoint to an outbound endpoint, even though the inbound endpoint is for a non-transactional transport. The outbound endpoint can be any type of transactional endpoint, such as VM or JMS. If the reliable acquisition flow cannot deliver the message, it ensures that the message isn't lost:

- For socket-based transports like HTTP, this means returning an "unsuccessful request" response to the client so that the client can retry the request.
- For resource-based transports like File or FTP, it means not deleting the file, so that it can be reprocessed.

The application logic flow (that is, the right-hand side of the diagram) delivers the message from the inbound endpoint (which uses a transactional transport) to the business logic for the application. The following transports can be used to decouple flows for reliability in a Mule HA cluster:

- VM
- JDBC
- Persistent JMS

BEST PRACTICE: Using the VM transport in an HA cluster is usually the easiest way to implement reliable acquisition in a flow.

NOTE: Complete information about the implementation of reliability patterns can be found here: http://www.mulesoft.org/documentation/display/mmc/reliability+patterns

Load Sharing with the VM Transport in a Mule Cluster

Messages passed on a VM queue in a Mule cluster will be automatically load balanced across receiving flows. This makes the VM transport an attractive alternative to JMS queuing when a Mule cluster is available. The following benefits can be realized:

- No additional operational infrastructure is required.
- VM queues are typically orders of magnitude faster than JMS queues.
- Clustering a JMS infrastructure for high availability isn't necessary.

The following figure illustrates workload sharing in more detail. Both nodes process messages related to order fulfillment. However, when one node is heavily loaded, it can move the processing for one or more steps in the process to another node. Here, processing of the "Process order discount" step is moved to Node 1, and processing of the "Fulfill order" step is moved to Node 2.
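Putting these ideas together, the following sketch (Mule 3 XML; the flow names, port, and queue name are hypothetical) pairs a reliable acquisition flow with an application logic flow over a VM queue. In an HA cluster the same VM queue also provides the load sharing described above, since any node may consume from it:

    <flow name="reliableAcquisition">
        <http:inbound-endpoint host="0.0.0.0" port="8081" path="orders"
                               exchange-pattern="request-response"/>
        <!-- Hand the message off to the VM queue; if the hand-off fails,
             the client receives an error response and can retry. -->
        <vm:outbound-endpoint path="orderProcessing" exchange-pattern="one-way"/>
    </flow>

    <flow name="applicationLogic">
        <!-- In a cluster, any node may consume from this shared VM queue,
             so heavily loaded nodes shed work to idle ones automatically. -->
        <vm:inbound-endpoint path="orderProcessing" exchange-pattern="one-way"/>
        <logger message="Processing order payload" level="INFO"/>
    </flow>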

Conclusion

In this Blueprint we learned how to develop Mule applications for scalability and availability. First we looked at load balancing, which provides an easy way to achieve horizontal scalability when your flows don't require shared state. When shared state is a requirement, as is frequently the case in integration scenarios, horizontal load balancing becomes a challenge because the application's state must be synchronized on an external resource, typically a database or messaging system. Mule HA clustering greatly simplifies this requirement by transparently providing a distributed memory grid to automatically share state between Mule instances. We saw how this feature can be further leveraged to implement reliability patterns and dynamic load sharing using VM queues. Further information about load balancing Mule can be found in our online documentation or by contacting MuleSoft Services.