Introduction to Analytical Modeling
Gregory V. Caliri
BMC Software, Inc.
Waltham, MA, USA

ABSTRACT

Analytical models are constructed and used by capacity planners to predict computing resource requirements related to workload behavior, content, and volume changes, and to measure the effects of hardware and software changes. Developing the analytical model provides the capacity planner with an opportunity to study and understand the various behavior patterns of work and hardware that currently exist. Certain factors must be taken into consideration to avoid common errors in model construction, analysis, and predictions.

Definition of Analytical Modeling

What is an analytical model? By pure definition, and in terms of being applied to computer systems, it is "a set of equations describing the performance of a computer system" [1]. In practical terms, it describes a collection of measured and calculated behaviors, over a finite period of time, of the different elements within the computer system -- workloads, hardware, software, and the CPU itself -- and can even include the actions and behaviors of its users and support personnel.

In most instances, the capacity planner constructs the model using activity measurement information generated and collected during one or more time intervals. It is critical that the interval or series of intervals used contains significant volumes of business-critical activity. Units of work are then characterized by type and grouped into workloads. The capacity analyst can then translate future business requirements into measurable units of computing resource consumption, and calculate capacity and performance projections for workloads.

Purposes for building analytical models

Some users will construct analytical models merely to gain an understanding of the current activity on the system, and to measure performance and analyze the behavior of the workloads and hardware within it.
Others will use them as a basis for predicting the behavior of certain elements of work within a system by inputting changes to different components of the system; these might include faster or slower hardware, configuration changes, or increased, decreased, or otherwise altered workload arrival patterns. Some will even carry the use of an analytical model beyond entering changes to the current system or set of systems, and use it as input to a second model so as to measure the effects of combining two existing systems.

For most sites, the projection of capacity requirements and future performance are the objectives behind the capacity planning effort. In these "what-if" analysis situations, the capacity planner follows a process consisting of the following steps:

- Receive projections for future business computing requirements
- Translate those business requirements into data processing resource requirements, based on the information contained in the model -- and other sources, if the model does not contain sufficient workloads with characteristics meeting those requirements
- Calculate the status of the system after the new workload requirements have been input
- Report results to management, listing any available options

Starting off

If you've never engaged in capacity planning, here is how to implement the process in your enterprise. Very simply:

- Define and identify the purpose(s) for your modeling study
- Ensure that sufficient collection mechanisms and analytical tools are available to support model construction and analysis
- Characterize workloads according to a set of rules and definitions
- Identify the intervals of time and critical workloads for study
- Accept input and business requirements from your user community
- Establish a standard method to report results back to management

Let's review each of these steps.

Define the purpose for the modeling study

It is important to define exactly what the purpose is for building models and what their specific uses will be. Most will use the model to execute a series of "what-if" changes to the environment by making alterations to the analytical model -- workload volume increases or decreases, hardware changes, or the addition of new users and transactions -- and then measuring the performance results. As a parallel function, an analytical model can be used to model changes to the existing environment that will allow the analyst to tune the system for improved performance.

Refining the objective for the use of the model can also serve to streamline the process. For instance, are we only concerned about CPU capacity? Must we control the response time of certain mission-critical workloads? Is detailed modeling of database changes required? Will you be analyzing and tuning for typically heavy use periods, or only for peak periods? Each scenario listed would entail different levels of data collection and varying complexities in workload characterization. Of course, if an analytical model has reduced detail and very coarse granularity in its components, it will not be as flexible, and it will not be able to return specific, fine-grained results. Establishing the purpose(s) for the modeling study will affect the total approach that is taken to model construction, characterization of workloads, and the series of analytical iterations to be performed with the model.
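The "what-if" alterations described above boil down to scaling a workload's measured consumption by a projected growth factor and recomputing the resulting utilization. A minimal sketch of that calculation follows; the workload names and percentages are illustrative assumptions, not figures from any real model.

```python
# A minimal "what-if" sketch: scale each workload's measured CPU busy
# percentage by a projected growth factor, then recompute the total.
# Baseline and growth numbers below are hypothetical.

def what_if(baseline_pct: dict, growth: dict) -> dict:
    """Apply growth factors to baseline per-workload CPU busy percentages.
    Workloads with no stated growth factor are left unchanged."""
    return {w: pct * growth.get(w, 1.0) for w, pct in baseline_pct.items()}

baseline = {"online": 35.0, "batch": 25.0, "tso": 10.0}  # % CPU busy, measured
growth = {"online": 1.30, "batch": 1.10}                 # projected change

projected = what_if(baseline, growth)
total = sum(projected.values())
print(projected, f"-- total {total:.1f}% busy")
```

If the projected total approaches saturation, the options reported to management might include faster hardware or shifting work to another interval.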
Definition of the modeling goal will also lead to increased confidence in the results of the study.

Data collection and retention; model construction

A data collection routine must be designed and implemented, and the data collected must be robust enough that appropriate records are available to identify all components and all pertinent workload activity. On an OS/390 [2] system, this would include all SMF [3] job- and task-related records (type 30s), all pertinent RMF [4] records for configuration, hardware activity, and workload activity (types 70-75, and type 78 records if collected), and all appropriate database and online activity monitor records (IMS DC Monitor [5], DB2 [6]-related SMF, IMF [7], etc.).

Some sites will find it impossible to generate, collect, and archive data with extreme granularity for extended periods of time. In these instances, it is recommended that prime intervals for modeling be identified early in the process and that the data from these periods be kept, even if certain monitoring instrumentation mechanisms have to be deployed.

Similar, but less detailed, data collection mechanisms exist on UNIX systems. Often the analyst must execute a series of UNIX commands, collect the output from those commands, and later generate reports and modeling input from that output. There are also several commercially available measurement and capacity planning tools; these packages provide their own collectors to generate measurement data that will permit creation of an analytical model.

Organize and characterize workloads according to a set of rules and definitions

This is probably the most difficult task, because it is highly subjective. As with other steps, errors made here can be carried forward through the process and cause improper results. To begin workload characterization, you must study all units of work in the enterprise at an extremely granular and low level.
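On UNIX, the command-driven collection described above often amounts to capturing command output and folding per-process consumption into named workloads. A hypothetical sketch of that step follows; the field layout and classification rules are illustrative assumptions, not a real site's definitions or any vendor tool's behavior.

```python
# Hypothetical sketch: take `ps -eo comm,pcpu`-style output (here a
# captured sample string) and sum per-process CPU use into workloads.
# The rules and sample data are illustrative only.

from collections import defaultdict

# Illustrative rules mapping a command-name prefix to a workload class.
WORKLOAD_RULES = [
    ("db2", "database"),
    ("httpd", "online"),
    ("batch", "batch"),
]

def classify(command: str) -> str:
    """Assign a process to a workload by simple name matching."""
    for prefix, workload in WORKLOAD_RULES:
        if command.startswith(prefix):
            return workload
    return "other"

def summarize(ps_output: str) -> dict:
    """Sum %CPU per workload from two-column command output."""
    totals = defaultdict(float)
    for line in ps_output.strip().splitlines()[1:]:  # skip the header row
        command, pcpu = line.split()
        totals[classify(command)] += float(pcpu)
    return dict(totals)

sample = """COMMAND %CPU
db2sysc 41.2
httpd 12.5
batchrun 30.1
sshd 0.3
"""
print(summarize(sample))
```

In practice the rules file grows with the enterprise, and the same grouping discipline applies whatever the collection mechanism.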
This will give the capacity planner an understanding of system activity and behavior patterns. If data collection was set up properly, this should be possible. Classify work according to its type -- batch, online, TSO, query transactions, long and short transactions, utilities, long and short processes. As part of the previous step, you should have already made computing activity and resource consumption trackable and identifiable. The mission-critical
workload definitions should already be roughly established. From this point, begin to classify work and build workloads by the type of work that it is, and do so from a system activity standpoint. In an OS/390 system, batch should be classified as short, long, and "hot," and further grouped as to its service: for instance, production batch serving the business might be placed in one set of workloads, while internal work of some type would be placed in others. Online database transactions should be identified and grouped not only by their production or test role but also by their function.

Some will attempt to perform the capacity planning process by classifying work by user communities or account codes. This approach is only valid if the work within each user group or accounting code group is also classified by the type of work and placed into its own workloads. Erroneous projections are often produced when user counts are employed, because this approach assumes that additional users will exhibit exactly the same behavior and execute work with the same distributions and resource consumption compositions as the existing user community.

Identify the intervals of time and critical workloads for study

When selecting an appropriate interval of time to measure and use as input to the construction of analytical models, observe the following:

1) Attempt to select a period of high, but not completely saturated, system utilization.
2) Keep in mind your objectives for modeling, and ensure that the model contains all of the critical workloads to be measured and observed.
3) Do not use intervals of time that contain anomalies of activity, such as looping processes, crashed regions, application outages, and other factors that are likely to cause unrealistic measurements.
4) The mix of workloads and their activity will change from one time of day to another.
In OS/390 mainframe systems, this is rather common; often there will be a high volume of online, real-time transactional processing during the standard business day and a concentration of batch work during the evening hours. The same variances can exist on other platforms as well. In such instances, select two or more intervals for modeling that have different mixes of work, and build separate models. The most important rule to follow in this area is to ensure that your model contains a robust workload mix, with the most critical workloads executing at a significant and typical activity level. A baseline analytical model should come reasonably close to representing a realistic situation.

Accept input and business requirements from your user community

Obviously, there are many methods of internal communication, and sometimes these may be dictated by corporate culture. One suggested method for receiving input from users is to hold a monthly meeting with a representative from each of your user communities. This meeting can be used by the capacity planner to receive input from, and deliver feedback to, user groups, and to explain the current state of the enterprise in plain language. There is also a side benefit to this meeting: different user groups can communicate with each other on upcoming projects. Often, duplication of effort is eliminated because two or more groups determine that they are doing the same work and, with a cooperative effort, save system development time and use fewer computing resources. It is also an excellent opportunity to release, distribute, and explain the monthly or quarterly performance and capacity plan to users.

Establish a standard method to report results back to management

Capacity planners often issue a monthly or quarterly report to management. The report should be straightforward, and offer brief explanations of the performance results of critical workloads.
There should also be a report on the state of the enterprise's capacity, with capacity and performance expectations based on growth projections. Revisions to the capacity plan, and the reasons for them, should also be included. One mistake often made is the inclusion of too much irrelevant information in reports or presentations. In most cases, upper management personnel have neither the time nor the interest to wade through technical jargon and attempt its translation. Use of visuals can cut through the technological language barrier.
Often the capacity planner gets into a quandary: he or she has to provide a high-level report for management and executives, but may also be challenged by technical personnel to explain the report in technical terms. In such instances, you must have the technical detail available and make it available to those who wish to see it. You will be asked for it at some point, so it may be advisable to distribute the high-level report and extend an invitation to your audience to read the extended technical report. If actions must be taken, executives often wish to have a variety of viable options, with the benefits and consequences of each, put before them. Avoid listing only one possible solution to a problem, and refrain from presenting options that are not practical to implement.

Queuing theory and its role in Analytical Modeling

The mathematical basis for many analytical modeling studies is the application of queuing theory. In plain English, it is a mechanism to reflect the length of time that a task waits to receive service; queue lengths and wait times are calculated from the speed at which a unit providing service (or "server," not to be confused with a "file server," etc.) can respond and the number of requests to be processed. If one thinks of a single device -- for instance, a disk, or a channel, or a CPU -- as a "server," the following formula can be applied to determine the average response time for a transaction to be handled at that one service point. This single-server formula is commonly presented in connection with "Little's Law."
Rt = Response time: the time from when the transaction enters the queue until the request is satisfied
S = Service time: the amount of time that the server itself spends handling the request
Tx = The transaction arrival rate (transactions per unit of time)

The formula:

Rt = S / (1 - (Tx * S))

If Tx * S is equal to or greater than one, then the server is considered to be saturated, as transactions are arriving at the server at a greater rate than the server can handle them.

To demonstrate this formula, let's assume that a serving CPU can service a request in 50 milliseconds, or .05 seconds. We can then take a count of transactions per hour and divide by 3600 to obtain transactions per second. Using the formula, we can input transaction counts and determine where the response time will degrade noticeably, and where the server will saturate. With lower arrival rates, the response time hovers very close to the service time, and very little queuing takes place. However, when the arrival rate is doubled, the queuing time accelerates to 64 milliseconds, and the transactions are spending more time queued for service than they are actually receiving service. The queuing time rises at an ever more rapid rate as more transactions are input to the server unit. The rightmost column lists the response time calculations if the service time were improved and reduced to 30 ms; with the shorter service time, the queues are shorter, and the response time degradation is not noticeable at arrival rates where it is pronounced with a 50 ms service time.

[Table omitted in transcription: analysis of a single server's response time by arrival rate, with service time constant at 50 ms. Columns: Trans/Hr; Trans/Sec; Pct. Server Busy; Response Time (service time .05s); Queue Time; Response if service time is .03s. At the highest arrival rates shown, the 50 ms server is saturated.]
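The formula above is easy to put into code. The sketch below recomputes a table of the kind just described; since the original table's arrival-rate values were lost in transcription, the rates used here are illustrative choices, not the paper's figures.

```python
# Single-server response time, using the formula from the paper:
#   Rt = S / (1 - (Tx * S))
# where S is the service time in seconds and Tx is the arrival rate in
# transactions per second. Tx * S is the server utilization; at
# Tx * S >= 1 the server is saturated and the queue grows without bound.

def response_time(arrival_rate_per_sec: float, service_time_sec: float) -> float:
    """Average response time at a single server, or infinity if saturated."""
    utilization = arrival_rate_per_sec * service_time_sec
    if utilization >= 1.0:
        return float("inf")  # arrivals outpace the server's capacity
    return service_time_sec / (1.0 - utilization)

if __name__ == "__main__":
    service_time = 0.050  # 50 ms per transaction
    header = f"{'Trans/Hr':>9} {'Trans/Sec':>9} {'Util %':>7} {'Resp ms':>8} {'Queue ms':>9}"
    print(header)
    for trans_per_hour in (18000, 36000, 54000, 64800, 70200, 72000):  # illustrative rates
        tx = trans_per_hour / 3600.0  # convert to transactions per second
        util = tx * service_time * 100.0
        r = response_time(tx, service_time)
        if r == float("inf"):
            print(f"{trans_per_hour:>9} {tx:>9.2f} {util:>7.1f} {'saturated':>8}")
        else:
            queue_ms = (r - service_time) * 1000.0
            print(f"{trans_per_hour:>9} {tx:>9.2f} {util:>7.1f} {r * 1000.0:>8.1f} {queue_ms:>9.1f}")
```

Running the loop makes the knee in the curve obvious: response time stays near the service time until utilization climbs, then rises steeply toward saturation.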
What should be evident is that there is a definite point where the response time begins to curve markedly upward. Now, one must consider that a process traveling through various points of service in a computing system will undergo some type of queuing at each point. The time spent in all of these queues, plus the service time spent at each point of service, comprises the response time for a single process or transaction. Mathematical formulae exist for the explanation and calculation of a process in a multi-point system, but they are beyond the scope of an introductory paper. The extended explanation of the above formula and its practical application with multiple points of service can be found at its source: a paper by Dr. Jeffrey Buzen, "A Simple Model of Transaction Processing," contained in the 1984 CMG Proceedings [1].

Modeling methodologies other than analytical

Two other modeling methods are often used to determine the current status of computing systems and to model changes to them. The first is the use of experimental models. Experimentation is performed by measuring existing situations, or by creating new situations and measuring the resulting performance and the percentage of capacity used. Benchmark workloads are run on new hardware and/or software environments, and true performance measurements are collected. When commercial computing environments were much smaller than they are today, it was relatively easy to simulate an actual business environment. Indeed, the "stress test" was a commonplace occurrence within the MIS world: a number of individuals were handed a scripted series of instructions to follow at a certain time, and performance results were measured at the conclusion of the test. It is still the most accurate method of computer performance prediction. However, several problems arise in running the experiments.
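The multi-point idea above can be sketched under a simplifying assumption the paper does not make explicit: treat each service point as an independent single-server queue obeying the same formula, and sum the per-point response times. Real multi-point models are more involved, as the text notes; the device names and rates below are illustrative.

```python
# End-to-end response time across several service points, assuming each
# point behaves as an independent single-server queue with
#   R_i = S_i / (1 - Tx_i * S_i).
# This is a simplifying sketch, not the full multi-point treatment the
# paper defers to Buzen's work.

def end_to_end_response(points):
    """points: list of (arrival_rate_per_sec, service_time_sec) tuples,
    one per service point visited. Returns total response time in seconds."""
    total = 0.0
    for arrival_rate, service_time in points:
        utilization = arrival_rate * service_time
        if utilization >= 1.0:
            return float("inf")  # any saturated point saturates the path
        total += service_time / (1.0 - utilization)
    return total

# Illustrative path: CPU, channel, and disk, each visited once per transaction.
path = [(8.0, 0.030), (8.0, 0.010), (8.0, 0.050)]
print(f"End-to-end response: {end_to_end_response(path) * 1000:.1f} ms")
```

Even in this simplified form, the sketch shows why the slowest, busiest point dominates the transaction's total response time.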
In today's world of MIS, the volumes of transactions processed are so high that it is impossible in many cases to obtain a true experimental reading of what might happen after changes are executed. How many individuals would be needed to enter 50,000 online terminal transactions in an hour, or to generate hits from varied locations to a web server, to duplicate the real load? Furthermore, with today's 24/7/365 expectations, an enterprise may not have the machine, time, and personnel resources with which practical experimentation can be performed. It is possible to simulate the operation of several hundred terminals using scripted keystroke files; however, it may not be practical to perform such simulations with thousands of terminals.

One of the research documents that the author encountered discussed the prospect of experimentation and, with a touch of humor, conveyed that some experiments are unfeasible for safety reasons. It cited two prime examples. One was the scenario of a jetliner, carrying a full load of passengers, attempting a landing with one engine shut off. The other was the possibility of driving a nuclear reactor to the point of critical mass so that researchers could definitively prove where that point actually occurs [7]. It is conceded that stress testing or overloading a computer system to determine its points of performance degradation would not carry the possibility of disaster that these experiments would, but one would wish to avoid it nonetheless.

Another modeling technique in use today, and gaining popularity in some areas of computing, is simulation. The following quotation provides a down-to-earth explanation of how simulation models work: "The simulation model describes the operation of the system in terms of individual events of the individual elements in the system. The interrelationships among the elements are also built into the model.
Then the model allows the computing device to capture the effect of the elements' action on each other as a dynamic process." (Kobayashi, 1981) [8] In short, this states that a simulation model describes workloads, and the different components of each, as well as the results of the continuous interaction of the different components of the computer system as time proceeds.

Several factors make it difficult to use simulation models for larger systems that contain multiple workloads and devices. The most notable are that there are too many variances in the behavior of the different devices and processes which compose the workloads over a period of time, and that the arrival of transactions used as input to the simulation will probably have an uneven distribution. This leads us to consider the less complex, but highly effective, analytical modeling technique. It must also be noted that, for several years, simulation modeling has been a proven, effective methodology in less complex analyses such as network traffic modeling and prediction. Because network traffic generally has fewer variances in its composition, and packets
generally do not interact with each other, simulation techniques can be applied in a practical fashion. There have also been other applications and hybrid techniques developed through the years; one is called Simalytic Modeling, and it combines both analytical and simulation techniques in the same model. An excellent paper (Norton) on this hybrid methodology is noted in the recommended reading section below.

Future of Analytical Modeling

Analytical models -- or products using analytical analysis and queuing theory -- and the tools to create and analyze them, and to report and predict performance, will continue to enjoy widespread use. In large-scale computer systems running applications that contain a high degree of variation in activity, analytical modeling will remain a highly practical method of analysis because of its relative simplicity and practicality. New technologies have emerged that will force changes in methods of model construction and "what-if" exercises. Internal and operational architecture changes in the mainframe arena will lead to a revision of the modeling paradigm, and analytical modeling should continue to serve the mainframe realm.

Actual stress test experiments will likely not be as prevalent as they were in the past for larger interactive applications, simply because of the large-scale efforts required to plan them and the human and system resources required to execute them. However, experimentation will certainly still be used to benchmark hardware and vendor software, and even for batch cycle testing. Simulation has been used for many years to provide a more detailed analysis of systems with workload components that contain limited variability, and it has and will continue to come into play in the world of the Internet. With the rise of e-commerce, simulation modeling appears to be a viable method for modeling web-based, multi-platform, and network applications.

Conclusion

This paper has touched upon several high-level areas of capacity planning and, specifically, the use of an analytical model as a primary tool. While analytical modeling is but one method in use today, different platforms and applications may require other approaches for effective capacity planning. The availability of statistical data, the platforms used for processing, and the objectives and complexities of studies can dictate the methodology to be used.

References

(1) Buzen, Dr. Jeffrey P., "A Simple Model of Transaction Processing," CMG Proceedings, 1984.
(2),(3),(4),(5) OS/390, RMF, SMF, and IMS DC Monitor are trademarks of IBM Corporation, White Plains, NY.
(6) IMF is a trademark of BMC Software, Inc., Houston, TX.
(7) Extracted from an Internet WWW home page, itself an extract from the text Simulation, by J. Skelnar, University of Malta.
(8) Kobayashi, Hisashi, Modeling and Analysis: An Introduction to System Performance Evaluation Methodology, The Systems Programming Series, Reading, MA: Addison-Wesley Publishing Company, 1981. (Quote attributed by Norton, below.)

Recommended reading in addition to the above

"Simalytic Enterprise Modeling -- The Best of Both Worlds," Norton, Tim R., Doctoral Candidate, Colorado Technical University, CMG Proceedings, 1996 (and many other works by Norton found in CMG Proceedings through the years).

"Using Analytical Modeling to Ensure Client/Server Application Performance," Leganza, Gene, Cayenne Systems, CMG Proceedings (and other works by Leganza in CMG Proceedings that deal with stress testing).
PRODUCT SHEET CA MICS Resource Management CA MICS Resource Management r12.6 CA MICS Resource Management (CA MICS) is a comprehensive IT resource utilization management system designed to fulfill the information
More informationImproving Compute Farm Throughput in Electronic Design Automation (EDA) Solutions
Improving Compute Farm Throughput in Electronic Design Automation (EDA) Solutions System Throughput in the EDA Design Flow Abstract Functional verification of Silicon on Chip (SoC) designs can contribute
More informationSystem Requirements Table of contents
Table of contents 1 Introduction... 2 2 Knoa Agent... 2 2.1 System Requirements...2 2.2 Environment Requirements...4 3 Knoa Server Architecture...4 3.1 Knoa Server Components... 4 3.2 Server Hardware Setup...5
More informationProduct Review: James F. Koopmann Pine Horse, Inc. Quest Software s Foglight Performance Analysis for Oracle
Product Review: James F. Koopmann Pine Horse, Inc. Quest Software s Foglight Performance Analysis for Oracle Introduction I ve always been interested and intrigued by the processes DBAs use to monitor
More informationCase Study In the last 80 years, Nationwide has grown from a small mutual auto
"The creation of a private cloud built around the z196 servers supports our business transformation goals by enabling the rapid, seamless deployment of new computing resources to meet emerging requirements."
More informationCHAPTER 3 CALL CENTER QUEUING MODEL WITH LOGNORMAL SERVICE TIME DISTRIBUTION
31 CHAPTER 3 CALL CENTER QUEUING MODEL WITH LOGNORMAL SERVICE TIME DISTRIBUTION 3.1 INTRODUCTION In this chapter, construction of queuing model with non-exponential service time distribution, performance
More informationCA Workload Automation Agents for Mainframe-Hosted Implementations
PRODUCT SHEET CA Workload Automation Agents CA Workload Automation Agents for Mainframe-Hosted Operating Systems, ERP, Database, Application Services and Web Services CA Workload Automation Agents are
More informationLoad Testing and Monitoring Web Applications in a Windows Environment
OpenDemand Systems, Inc. Load Testing and Monitoring Web Applications in a Windows Environment Introduction An often overlooked step in the development and deployment of Web applications on the Windows
More informationSTATISTICA Solutions for Financial Risk Management Management and Validated Compliance Solutions for the Banking Industry (Basel II)
STATISTICA Solutions for Financial Risk Management Management and Validated Compliance Solutions for the Banking Industry (Basel II) With the New Basel Capital Accord of 2001 (BASEL II) the banking industry
More informationMicrosoft Dynamics NAV 2013 R2 Sizing Guidelines for On-Premises Single Tenant Deployments
Microsoft Dynamics NAV 2013 R2 Sizing Guidelines for On-Premises Single Tenant Deployments July 2014 White Paper Page 1 Contents 3 Sizing Recommendations Summary 3 Workloads used in the tests 3 Transactional
More informationManage your IT Resources with IBM Capacity Management Analytics (CMA)
Manage your IT Resources with IBM Capacity Management Analytics (CMA) New England Users Group (NEDB2UG) Meeting Sturbridge, Massachusetts, USA, http://www.nedb2ug.org November 19, 2015 Milan Babiak Technical
More informationCA Scheduler Job Management r11
PRODUCT SHEET CA Scheduler Job Management CA Scheduler Job Management r11 CA Scheduler Job Management r11 (CA Scheduler JM), part of the Job Management solution from CA Technologies, is a premier z/oscentric
More informationVirtualization in Healthcare: Less Can Be More
HEALTH INDUSTRY INSIGHTS EXECUTIVE BRIEF Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.935.4445 F.508.988.7881 www.healthindustry-insights.com Virtualization in Healthcare: Less Can
More informationNewsletter 4/2013 Oktober 2013. www.soug.ch
SWISS ORACLE US ER GRO UP www.soug.ch Newsletter 4/2013 Oktober 2013 Oracle 12c Consolidation Planer Data Redaction & Transparent Sensitive Data Protection Oracle Forms Migration Oracle 12c IDENTITY table
More informationUsing Simulation Modeling to Predict Scalability of an E-commerce Website
Using Simulation Modeling to Predict Scalability of an E-commerce Website Rebeca Sandino Ronald Giachetti Department of Industrial and Systems Engineering Florida International University Miami, FL 33174
More informationParallels Virtuozzo Containers
Parallels Virtuozzo Containers White Paper Top Ten Considerations For Choosing A Server Virtualization Technology www.parallels.com Version 1.0 Table of Contents Introduction... 3 Technology Overview...
More informationPLA 7 WAYS TO USE LOG DATA FOR PROACTIVE PERFORMANCE MONITORING. [ WhitePaper ]
[ WhitePaper ] PLA 7 WAYS TO USE LOG DATA FOR PROACTIVE PERFORMANCE MONITORING. Over the past decade, the value of log data for monitoring and diagnosing complex networks has become increasingly obvious.
More information11.1 inspectit. 11.1. inspectit
11.1. inspectit Figure 11.1. Overview on the inspectit components [Siegl and Bouillet 2011] 11.1 inspectit The inspectit monitoring tool (website: http://www.inspectit.eu/) has been developed by NovaTec.
More informationRecommendations for Performance Benchmarking
Recommendations for Performance Benchmarking Shikhar Puri Abstract Performance benchmarking of applications is increasingly becoming essential before deployment. This paper covers recommendations and best
More informationWhat Is Specific in Load Testing?
What Is Specific in Load Testing? Testing of multi-user applications under realistic and stress loads is really the only way to ensure appropriate performance and reliability in production. Load testing
More informationDeveloping a Load Testing Strategy
Developing a Load Testing Strategy Michele Ruel St.George Bank CMGA 2005 Page 1 Overview... 3 What is load testing?... 4 Scalability Test... 4 Sustainability/Soak Test... 4 Comparison Test... 4 Worst Case...
More informationDMS Performance Tuning Guide for SQL Server
DMS Performance Tuning Guide for SQL Server Rev: February 13, 2014 Sitecore CMS 6.5 DMS Performance Tuning Guide for SQL Server A system administrator's guide to optimizing the performance of Sitecore
More informationMEASURING PRE-PRODUCTION APPLICATION, SYSTEM AND PERFORMANCE VOLUME STRESS TESTING WITH TEAMQUEST
WHITE PAPER IT Knowledge Exchange Series MEASURING PRE-PRODUCTION APPLICATION, SYSTEM AND PERFORMANCE VOLUME STRESS TESTING WITH TEAMQUEST A white paper on how to enhance your testing discipline Contents
More informationEMC XtremSF: Delivering Next Generation Performance for Oracle Database
White Paper EMC XtremSF: Delivering Next Generation Performance for Oracle Database Abstract This white paper addresses the challenges currently facing business executives to store and process the growing
More informationCloud Computing Capacity Planning. Maximizing Cloud Value. Authors: Jose Vargas, Clint Sherwood. Organization: IBM Cloud Labs
Cloud Computing Capacity Planning Authors: Jose Vargas, Clint Sherwood Organization: IBM Cloud Labs Web address: ibm.com/websphere/developer/zones/hipods Date: 3 November 2010 Status: Version 1.0 Abstract:
More informationPredictive Intelligence: Identify Future Problems and Prevent Them from Happening BEST PRACTICES WHITE PAPER
Predictive Intelligence: Identify Future Problems and Prevent Them from Happening BEST PRACTICES WHITE PAPER Table of Contents Introduction...1 Business Challenge...1 A Solution: Predictive Intelligence...1
More informationBenchmarking Cassandra on Violin
Technical White Paper Report Technical Report Benchmarking Cassandra on Violin Accelerating Cassandra Performance and Reducing Read Latency With Violin Memory Flash-based Storage Arrays Version 1.0 Abstract
More informationApplication Performance Testing Basics
Application Performance Testing Basics ABSTRACT Todays the web is playing a critical role in all the business domains such as entertainment, finance, healthcare etc. It is much important to ensure hassle-free
More informationPerformance Engineering and Global Software Development
Engineering and Global Software Development Sohel Aziz, Gaurav Caprihan, Kingshuk Dasgupta, and Stephen Lane The need to achieve system performance in a way that reduces risk and improves cost-effectiveness
More informationEnterprise Report Management CA View, CA Deliver, CA Dispatch, CA Bundl, CA Spool, CA Output Management Web Viewer
PRODUCT FAMILY SHEET Enterprise Report Management Enterprise Report Management CA View, CA Deliver, CA Dispatch, CA Bundl, CA Spool, CA Output Management Web Viewer CA Technologies provides leading software
More informationCharacterizing Performance of Enterprise Pipeline SCADA Systems
Characterizing Performance of Enterprise Pipeline SCADA Systems By Kevin Mackie, Schneider Electric August 2014, Vol. 241, No. 8 A SCADA control center. There is a trend in Enterprise SCADA toward larger
More informationWindows Server 2008 R2 Hyper-V Live Migration
Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described
More informationFilling In The IT Systems Management White Space Gap
IBM Software Group Filling In The IT Systems Management White Space Gap Ed Woods - IBM Corporation Session #16331 Tuesday, March 3rd: 1:45 PM - 2:45 PM 2015 IBM Corporation Agenda Introduction Defining
More informationNew Relic & JMeter - Perfect Performance Testing
TUTORIAL New Relic & JMeter - Perfect Performance Testing by David Sale Contents Introduction 3 Demo Application 4 Hooking Into New Relic 4 What Is JMeter? 6 Installation and Usage 6 Analysis In New Relic
More informationThe Association of System Performance Professionals
The Association of System Performance Professionals The Computer Measurement Group, commonly called CMG, is a not for profit, worldwide organization of data processing professionals committed to the measurement
More informationReducing the Cost and Complexity of Business Continuity and Disaster Recovery for Email
Reducing the Cost and Complexity of Business Continuity and Disaster Recovery for Email Harnessing the Power of Virtualization with an Integrated Solution Based on VMware vsphere and VMware Zimbra WHITE
More informationDIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION
DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies
More informationIBM FlashSystem and Atlantis ILIO
IBM FlashSystem and Atlantis ILIO Cost-effective, high performance, and scalable VDI Highlights Lower-than-PC cost Better-than-PC user experience Lower project risks Fast provisioning and better management
More informationMission-Critical Java. An Oracle White Paper Updated October 2008
Mission-Critical Java An Oracle White Paper Updated October 2008 Mission-Critical Java The Oracle JRockit family of products is a comprehensive portfolio of Java runtime solutions that leverages the base
More informationAgile Business Intelligence Data Lake Architecture
Agile Business Intelligence Data Lake Architecture TABLE OF CONTENTS Introduction... 2 Data Lake Architecture... 2 Step 1 Extract From Source Data... 5 Step 2 Register And Catalogue Data Sets... 5 Step
More informationThe Importance of Performance Assurance For E-Commerce Systems
WHY WORRY ABOUT PERFORMANCE IN E-COMMERCE SOLUTIONS? Dr. Ed Upchurch & Dr. John Murphy Abstract This paper will discuss the evolution of computer systems, and will show that while the system performance
More informationConsequences of Poorly Performing Software Systems
Consequences of Poorly Performing Software Systems COLLABORATIVE WHITEPAPER SERIES Poorly performing software systems can have significant consequences to an organization, well beyond the costs of fixing
More informationVirtual Desktops Security Test Report
Virtual Desktops Security Test Report A test commissioned by Kaspersky Lab and performed by AV-TEST GmbH Date of the report: May 19 th, 214 Executive Summary AV-TEST performed a comparative review (January
More informationW H I T E P A P E R. Reducing Server Total Cost of Ownership with VMware Virtualization Software
W H I T E P A P E R Reducing Server Total Cost of Ownership with VMware Virtualization Software Table of Contents Executive Summary............................................................ 3 Why is
More informationPerformance and scalability of a large OLTP workload
Performance and scalability of a large OLTP workload ii Performance and scalability of a large OLTP workload Contents Performance and scalability of a large OLTP workload with DB2 9 for System z on Linux..............
More informationIT Survey Results: Mainframe Is an Engine of Business Growth and a Reason for Optimism
Thought Leadership white paper IT Survey Results: Mainframe Is an Engine of Business Growth and a Reason for Optimism By Mike Moser, Product Management Director and Program Executive, BMC Software Table
More informationPROGRESS OPENEDGE PRO2 DATA REPLICATION
WHITEPAPER PROGRESS OPENEDGE PRO2 DATA REPLICATION DATA OpenEdge OpenEdge Pro2 Data Server DATA Oracle, SQL Server, OpenEdge Contents Introduction... 2 Executive Summary... 3 The Pro2 Solution... 4 Pro2
More informationEnergy Efficient MapReduce
Energy Efficient MapReduce Motivation: Energy consumption is an important aspect of datacenters efficiency, the total power consumption in the united states has doubled from 2000 to 2005, representing
More informationHigh Velocity Analytics Take the Customer Experience to the Next Level
89 Fifth Avenue, 7th Floor New York, NY 10003 www.theedison.com 212.367.7400 High Velocity Analytics Take the Customer Experience to the Next Level IBM FlashSystem and IBM Tealeaf Printed in the United
More informationPlatform as a Service: The IBM point of view
Platform as a Service: The IBM point of view Don Boulia Vice President Strategy, IBM and Private Cloud Contents 1 Defining Platform as a Service 2 The IBM view of PaaS 6 IBM offerings 7 Summary 7 For more
More informationAudit TM. The Security Auditing Component of. Out-of-the-Box
Audit TM The Security Auditing Component of Out-of-the-Box This guide is intended to provide a quick reference and tutorial to the principal features of Audit. Please refer to the User Manual for more
More informationUptime Infrastructure Monitor Whitepaper THE TRUTH ABOUT AGENT VS. AGENTLESS MONITORING. A Short Guide to Choosing the Right Monitoring Solution.
Uptime Infrastructure Monitor Whitepaper THE TRUTH ABOUT AGENT VS. AGENTLESS MONITORING A Short Guide to Choosing the Right Monitoring Solution. When selecting an enterprise-level IT monitoring solution,
More informationHow To Handle Big Data With A Data Scientist
III Big Data Technologies Today, new technologies make it possible to realize value from Big Data. Big data technologies can replace highly customized, expensive legacy systems with a standard solution
More informationOPTIMIZING PERFORMANCE IN AMAZON EC2 INTRODUCTION: LEVERAGING THE PUBLIC CLOUD OPPORTUNITY WITH AMAZON EC2. www.boundary.com
OPTIMIZING PERFORMANCE IN AMAZON EC2 While the business decision to migrate to Amazon public cloud services can be an easy one, tracking and managing performance in these environments isn t so clear cut.
More informationFive Reasons to Take Your Virtualization Environment to a New Level
Five Reasons to Take Your Virtualization Environment to a New Level Study finds the addition of robust management capabilities drives 20 to 40 percent increases in key performance metrics WHITE PAPER Table
More informationPerformance Prediction, Sizing and Capacity Planning for Distributed E-Commerce Applications
Performance Prediction, Sizing and Capacity Planning for Distributed E-Commerce Applications by Samuel D. Kounev (skounev@ito.tu-darmstadt.de) Information Technology Transfer Office Abstract Modern e-commerce
More informationBest practices for data migration.
IBM Global Technology Services June 2007 Best practices for data migration. Methodologies for planning, designing, migrating and validating data migration Page 2 Contents 2 Executive summary 4 Introduction
More informationTableau Server 7.0 scalability
Tableau Server 7.0 scalability February 2012 p2 Executive summary In January 2012, we performed scalability tests on Tableau Server to help our customers plan for large deployments. We tested three different
More informationBIGDATA GREENPLUM DBA INTRODUCTION COURSE OBJECTIVES COURSE SUMMARY HIGHLIGHTS OF GREENPLUM DBA AT IQ TECH
BIGDATA GREENPLUM DBA Meta-data: Outrun your competition with advanced knowledge in the area of BigData with IQ Technology s online training course on Greenplum DBA. A state-of-the-art course that is delivered
More informationHow To Improve Performance
Engineering and Global Software Development Sohel Aziz, Gaurav Caprihan, Kingshuk Dasgupta, and Stephen Lane Abstract The need to achieve system performance in a way that reduces risk and improves cost-effectiveness
More informationIBM and ACI Worldwide Providing comprehensive, end-to-end electronic payment solutions for retail banking
IBM and ACI Worldwide Providing comprehensive, end-to-end electronic payment solutions for retail banking IBM and ACI offer unparalleled expertise in designing and optimizing payment systems As leading
More informationDirections for VMware Ready Testing for Application Software
Directions for VMware Ready Testing for Application Software Introduction To be awarded the VMware ready logo for your product requires a modest amount of engineering work, assuming that the pre-requisites
More informationHYBRID APPLICATION PERFORMANCE TESTING
HYBRID APPLICATION PERFORMANCE TESTING Managing the performance of today s mobile, web and cloud applications requires a proactive, multi-faceted approach to performance testing. This paper is sponsored
More informationBMC Mainframe Solutions. Optimize the performance, availability and cost of complex z/os environments
BMC Mainframe Solutions Optimize the performance, availability and cost of complex z/os environments If you depend on your mainframe, you can rely on BMC Sof tware. Yesterday. Today. Tomorrow. You can
More informationSERVER VIRTUALIZATION IN MANUFACTURING
SERVER VIRTUALIZATION IN MANUFACTURING White Paper 2 Do s and Don ts for Your Most Critical Manufacturing Systems Abstract While the benefits of server virtualization at the corporate data center are receiving
More information