LBPerf: An Open Toolkit to Empirically Evaluate the Quality of Service of Middleware Load Balancing Services




LBPerf: An Open Toolkit to Empirically Evaluate the Quality of Service of Middleware Load Balancing Services
Ossama Othman, Jaiganesh Balasubramanian, Dr. Douglas C. Schmidt
{jai, ossama, schmidt}@dre.vanderbilt.edu
Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville
7 July 2004

Distributed Systems
- Typical issues with distributed systems:
  - Heterogeneous environments
  - Concurrency
  - Large bursts of client requests
  - 24/7 availability
  - Stringent QoS requirements
- Examples of distributed systems:
  - E-commerce
  - Online trading systems
  - Mission-critical systems

Motivation
- Development and maintenance of QoS-enabled distributed systems is non-trivial and requires expertise that application developers often lack.
- Solution: middleware (e.g., CORBA), which:
  - Shields distributed system developers from the complexities of developing distributed applications
  - Facilitates manipulation of QoS requirements and management of resources

Load Balancing
- Load balancing can improve the efficiency and scalability of a distributed system.
- A load balancing service can be implemented in the network layer, the OS layer, or the middleware layer.
- Why choose middleware-layer load balancing?
  - Can take into account distributed system state
  - Can take into account the system's run-time behavior
  - Gives applications control over load balancing policies
  - Can take into account request content

Common Deployment Scenario
- Clients: multiple clients making request invocations, potentially non-deterministically; the duration of a client's interaction is called a session.
- Members: multiple instances of the same object implementation.
- Object groups: collections of members among which loads are distributed equitably; logically a single object.
- Load balancer: transparently distributes client requests and replies to and from members within an object group.

Common Load Balancing Tasks
- Manage multiple object groups; groups may be modified at run time.
- Bind clients to servers (group members); servers are selected according to the balancing strategy configured for a given object group.
- Query and analyze loads:
  - Pulled: loads are retrieved from a monitoring object.
  - Pushed: loads are pushed to the load balancer by a monitoring object.
- Rebalance loads across group members by rebinding clients to other servers.
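
The pull and push reporting models above can be sketched as follows. This is an illustrative sketch, not the TAO/LBPerf API; the class and method names (`LoadMonitor`, `pull`, `push`) are assumptions made for the example.

```python
class LoadMonitor:
    """Tracks one member's load (e.g., a utilization sample)."""
    def __init__(self, member_id):
        self.member_id = member_id
        self.load = 0.0

class LoadBalancer:
    def __init__(self):
        self.loads = {}  # member_id -> last known load

    # Pull model: the balancer queries the monitor when it needs a value.
    def pull(self, monitor):
        self.loads[monitor.member_id] = monitor.load

    # Push model: the monitor reports a fresh sample on its own schedule.
    def push(self, member_id, load):
        self.loads[member_id] = load

mon = LoadMonitor("member-a")
mon.load = 0.42
lb = LoadBalancer()
lb.pull(mon)               # balancer-initiated query
lb.push("member-b", 0.17)  # monitor-initiated report
```

Either way, the balancer ends up with a (possibly stale) view of per-member load; the trade-off is who pays the reporting cost and when.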

Inter-task Effects
- Each of the common load balancing tasks can affect the performance of the others:
  - Client binding bursts can degrade load rebalancing performance.
  - High-frequency load reporting can degrade client binding responsiveness.
  - Costly load balancing strategies can consume resources needed to respond to object group membership changes.
  - Similar interactions arise for other inter-task combinations.
- Execution of some tasks may starve others.

Overall Performance Evaluation Considerations
- Each of the load balancing tasks may execute non-deterministically; this non-determinism should be reproduced to accurately reflect true deployed run-time behavior.
- Some tasks may be less critical than others; give less weight to less performance-critical tasks.
- How do different application workloads affect load balancing performance? Different types of workloads elicit different behavior and responsiveness from the load balancer.
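
One simple way to "give less weight to less performance-critical tasks" is a weighted composite score over per-task metrics. The task names and weights below are illustrative assumptions, not values used by LBPerf.

```python
def weighted_score(task_scores, weights):
    """Combine per-task scores (dicts keyed by task name) using weights
    that reflect each task's criticality; weights should sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(task_scores[t] * weights[t] for t in task_scores)

# Hypothetical normalized scores and criticality weights:
scores  = {"client_binding": 0.9, "load_reporting": 0.7, "rebalancing": 0.5}
weights = {"client_binding": 0.5, "load_reporting": 0.2, "rebalancing": 0.3}
print(weighted_score(scores, weights))  # 0.45 + 0.14 + 0.15 ≈ 0.74
```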

Evaluating Load Balancing Strategies
- Load balancing strategies can be either adaptive or non-adaptive.
- Strategies can employ various run-time parameters and load metrics, and may perform differently under different parameter configurations.
- Determining the appropriate load balancing strategies for different classes of distributed applications is hard without the guidance of comprehensive performance evaluation models, systematic benchmarks, and empirical results.

Introducing LBPerf
- LBPerf is an open-source benchmarking tool suite for evaluating middleware load balancing strategies.
- Supports a range of adaptive and non-adaptive load balancing strategies.
- Tunes different configurations of middleware load balancing, including the choice of load balancing strategy and the run-time parameters associated with each strategy.
- Evaluates strategies using metrics such as throughput, latency, and CPU utilization.

Workload Characterization (1/2)
- Empirical evaluations are not useful unless the workload is representative of the context in which the load balancer is deployed.
- The workload model could be any of the following:
  - Closed analytical network models
  - Simulation models
  - Executable models
- Our benchmarking experiments are based on executable workload models.

Workload Characterization (2/2)
- Executable models can be classified as:
  - Resource type: characterized by the type of resource being consumed.
  - Service type: characterized by the type of service being performed on behalf of the clients.
  - Session type: characterized by the type of requests initiated by a client and serviced by a server in the context of a session.
- Our benchmarking experiments focused on generating session-type workloads.
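
A session-type workload driver can be sketched as below: one client issues a stream of requests within a session, with some jitter in per-request cost. The function names and the stand-in "work" loop are illustrative assumptions, not LBPerf's interface.

```python
import random

def cpu_bound_request(n):
    # Stand-in for a CPU-intensive server-side operation.
    return sum(i * i for i in range(n))

def run_session(num_requests, seed=None):
    """One client session: a stream of requests whose per-request cost
    varies, mimicking non-deterministic load on the server."""
    rng = random.Random(seed)
    results = []
    for _ in range(num_requests):
        work = rng.randint(800, 1200)  # non-uniform request cost
        results.append(cpu_bound_request(work))
    return results

session = run_session(num_requests=5, seed=42)
print(len(session))  # 5 completed requests in this session
```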

Experiment
- Focus: load balancing behavior under different types of workloads, with single-threaded clients generating CPU-intensive requests on the servers.
- Experiments repeated for different strategies:
  - Round Robin
  - Random
  - LoadMinimum
  - Least Loaded
- Measurements:
  - Throughput: the number of requests processed per second by the server.
  - CPU utilization: CPU usage percentage.
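
Three of the selection policies above can be sketched minimally as follows (member names and load values are illustrative; this is not LBPerf code):

```python
import itertools
import random

members = ["a", "b", "c"]
loads = {"a": 0.8, "b": 0.2, "c": 0.5}  # assumed reported loads

# Round Robin: cycle through members, ignoring load entirely.
rr = itertools.cycle(members)
def round_robin():
    return next(rr)

# Random: uniform choice, also load-oblivious.
random.seed(0)
def random_choice():
    return random.choice(members)

# Least Loaded: pick the member with the smallest reported load.
def least_loaded():
    return min(members, key=lambda m: loads[m])

print([round_robin() for _ in range(4)])  # ['a', 'b', 'c', 'a']
print(least_loaded())                     # 'b'
```

Round Robin and Random are non-adaptive (they never consult `loads`), while Least Loaded is adaptive: its decisions change as load reports arrive.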

Run-time Configurations
- Load reporting interval: the time between successive load information updates from the server location to the load balancer.
- Reject threshold (Least Loaded strategy): the load at which servers stop receiving additional requests.
- Critical threshold (Least Loaded strategy): the load at which servers start shedding load.
- Migration threshold (LoadMinimum strategy): the load difference between the most loaded and least loaded machines that triggers a migration of load from the former to the latter.
- Dampening value: the fraction of the newly reported load that is considered in fresh load balancing decisions.
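
These parameters can be sketched as small predicates and an update rule. The formulas are our reading of the slide text, not LBPerf's exact implementation, and the numeric values are hypothetical.

```python
def dampen(old_load, new_load, dampening):
    """Only a fraction `dampening` of the newly reported load feeds the
    next balancing decision; the rest is the prior load estimate."""
    return dampening * new_load + (1.0 - dampening) * old_load

def least_loaded_accepts(load, reject_threshold):
    """Least Loaded: a member stops receiving requests once its load
    reaches the reject threshold."""
    return load < reject_threshold

def load_minimum_migrates(max_load, min_load, migration_threshold):
    """LoadMinimum: migrate load when the most- and least-loaded
    machines differ by more than the migration threshold."""
    return (max_load - min_load) > migration_threshold

print(dampen(0.6, 1.0, 0.25))                          # ≈ 0.7
print(least_loaded_accepts(0.9, reject_threshold=0.8)) # False
print(load_minimum_migrates(0.9, 0.3, 0.5))            # True
```

Dampening smooths out transient load spikes so a single burst does not trigger a flurry of rebinding decisions.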

Results Analysis
- Adaptive load balancing strategies generally perform better than non-adaptive strategies in the presence of non-uniform loads; the Least Loaded strategy outperforms the other three under such loading conditions.
- Reducing the number of client migrations is necessary for achieving maximum performance; client session migrations are not always effective.
- System utilization must be maintained to enable more predictable behavior of the strategies.

Future Work
- Extend LBPerf to evaluate other types of workloads.
- Use the empirical results as a learning process to understand the nuances of the different run-time behaviors of adaptive load balancing strategies.
- Use observations from the learning process to train an operational phase by developing self-adaptive load balancing strategies that dynamically tune their run-time parameters according to the load being experienced.
- Implement a non-deterministic traffic generator for client requests, load reports, and membership changes.

Concluding Remarks
- While load balancing can significantly improve distributed application performance, determining the optimal load balancing configuration is non-trivial.
- Some load balancing strategies may behave worse than others under different types of workloads; similar differences may be observed when utilizing different load metrics.
- Improperly tuned strategy parameters can reduce a strategy's effectiveness, negatively impacting performance and overall system scalability.
- Overall load balancer scalability and responsiveness may degrade as the different tasks performed by a load balancer execute concurrently.

References
- Ossama Othman, Jaiganesh Balasubramanian, and Douglas C. Schmidt, "The Design of an Adaptive Load Balancing and Monitoring Service," http://www.dre.vanderbilt.edu/~ossama/papers/pdf/iwsas_2003.pdf
- Jaiganesh Balasubramanian, Ossama Othman, and Douglas C. Schmidt, "Evaluating the Performance of Middleware Load Balancing Strategies," http://www.dre.vanderbilt.edu/~jai/edoc_2004.pdf
- Download Cygnus, LBPerf, and TAO from http://deuce.doc.wustl.edu/download.html