- Simon Norman
Transcription
1
2 We have a very focused agenda, so let's get right to it. The theme of our session today is automation and how that automation is delivered with ThruPut Manager Automation Edition, or TM AE. We will discuss Capping Capacity in the traditional sense, as it is done today. Then we will talk about Capping Demand, an innovative technique we've developed with TM AE to directly reduce monthly software costs while maintaining performance for your key applications. Finally, we'll talk about Automating Workload Management, a necessary step to enable Capping Demand.
3 Capacity Management and Workload Management are key techniques that have been in the datacenter for years. Our position is that these functions can deliver huge benefits when automated. While anything can be accomplished with an army of people, machines and money, we believe that automation is the most effective and efficient technique to run a datacenter today. You will see these charts again later in the presentation. The chart on the top left illustrates how Automated Capacity Management can lower software costs. The chart on the bottom right shows how Automated Workload Management can improve performance by managing utilization and balancing workloads.
4 Why do datacenters cap at all? Deciding to use capping means making a conscious decision to lower your capacity levels, thus increasing your utilization levels. It seems counterintuitive from a performance standpoint.
5 Of course, this is why. Software is the largest operating expense in the datacenter. While sub-capacity pricing allows billing based on peak consumption, running without caps means a spike in the Rolling Four Hour Average can lead to unacceptably high software bills. Caps provide the ability to control this cost, but with some risk to performance.
6 Managing Your z/OS Software Costs. Caps are also referred to as Capacity Limits. As mentioned, their primary purpose is to control monthly software charges by limiting the Rolling Four Hour Average. Hard Caps limit application CPU consumption to the cap level at all times, which means the Rolling Four Hour Average can never exceed the cap limit. Soft Caps only limit application consumption when the Rolling Four Hour Average exceeds the cap level.
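The Rolling Four Hour Average that drives all of this is just a moving average of recent consumption. As a minimal sketch in Python (assuming, for illustration, MSU readings taken every 5 minutes, so 48 readings span four hours):

```python
from collections import deque

def rolling_four_hour_average(readings, interval_minutes=5):
    """Compute the R4HA over MSU readings taken every `interval_minutes`."""
    window = (4 * 60) // interval_minutes  # intervals in four hours
    recent = deque(maxlen=window)
    r4ha = []
    for msu in readings:
        recent.append(msu)
        r4ha.append(sum(recent) / len(recent))
    return r4ha

# A spike in demand moves the R4HA far less than the instantaneous reading.
demand = [400] * 48 + [900] * 12 + [400] * 36   # one hour at 900 MSUs
avg = rolling_four_hour_average(demand)
print(round(max(demand)), round(max(avg)))      # 900 525
```

Note how a one-hour spike to 900 MSUs moves the peak R4HA only to 525. This is why soft caps on the R4HA are so much gentler than hard caps on instantaneous demand.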
7 This is what a hard cap looks like. Neither the application demand nor the Rolling Four Hour Average is ever allowed to exceed the set limit. It's just like having a smaller machine. This ensures very consistent software bills, but can be rather unforgiving to application performance.
8 This is what soft caps look like. The application demand is freely allowed to exceed the cap limit as long as the Rolling Four Hour Average, shown with the red line, does not exceed the limit. As you can see, this option provides much more flexibility for application performance.
9 When the Rolling Four Hour Average does exceed the cap limit, applications are immediately affected. When this occurs, the LPAR (or group of LPARs) is capped by PR/SM. CPU resources are only provided to the LPAR at a rate that does not exceed the capacity limit regardless of the actual application demand. The effect is very much as if the applications are suddenly running on a smaller machine. Further, the capping will remain in place until the Rolling Four Hour Average is back below the capacity limit. In this case the effect remained for 3 hours.
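A rough simulation of this behavior, assuming only that delivered consumption is clamped to the limit whenever the R4HA is at or above it (an illustration, not PR/SM's actual algorithm):

```python
from collections import deque

def simulate_soft_cap(demand, limit, window=48):
    """Clamp delivered MSUs to `limit` whenever the R4HA of *delivered*
    consumption reaches the limit; a sketch of soft-cap enforcement."""
    recent = deque(maxlen=window)
    delivered, capped_intervals = [], 0
    for msu in demand:
        r4ha = sum(recent) / len(recent) if recent else 0.0
        if r4ha >= limit:                # soft cap in effect
            msu = min(msu, limit)
            capped_intervals += 1
        recent.append(msu)
        delivered.append(msu)
    return delivered, capped_intervals

delivered, capped = simulate_soft_cap([600] * 100, limit=500)
print(capped)   # nearly every interval ends up capped
```

With sustained demand above the limit, the clamp stays in effect interval after interval because the delivered R4HA never falls back below the limit, mirroring the multi-hour capping described above.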
10 So what do we do when cap limits are exceeded? The typical, traditional manual methods are to monitor and react to changes as best we can. We cancel jobs, adjust priorities, and do our best to help our applications continue to perform in a more constrained environment. This requires a lot of expert staff and still can't be performed at machine speeds or volumes.
11 Availability is usually job one in a mainframe environment. So when limits are hit, often we simply resort to raising the cap. While this provides a positive effect on our applications, it provides a negative effect on our software bill. There is really no point in putting the cap back down as the software bill is based on the highest peak.
12 Recalling our description of hitting the wall: when the Rolling Four Hour Average hits the capacity limit, the entire LPAR is capped, potentially affecting all applications on the LPAR. Rather than allow this to occur, TM AE allows you to cap only specified batch workloads. Further, TM AE takes a soft-hammer approach, gradually reducing or deferring the CPU consumption of the batch that you designate before you hit the wall. Many shops are under the mistaken impression that batch does not impact their peaks because it runs at lower priority and it runs at night. The facts don't bear that argument out. First, keep in mind that all workloads contribute to the Rolling Four Hour Average regardless of priority: a CPU second is a CPU second. Second, consider the following chart.
13 This chart was generated from actual customer data and only reflects the day shift. While online is certainly the dominant contributor, note that batch makes up a significant portion of the Rolling Four Hour Average. Even a modest reduction in batch consumption during these peaks can yield significant software savings while maintaining performance of your key workloads. Now we'll go into a little more detail about how TM AE allows you to lower your software costs by capping demand.
14 In order to cap demand at the right times, TM AE has to understand the environment and be aware of any changes that may occur. TM AE determines whether there are any Defined Capacity or LPAR Group Limits set by the installation, and will detect any changes made to these limits. It also tracks the current rolling 4 hour average CPU usage for each LPAR and the current CPU demand. It's important to keep current demand in mind when deciding whether certain workloads should be increased. To have a full picture, TM AE also tracks the overall CPU consumption of the CEC and of each LPAR in the CEC, since high CPU consumption by one or more LPARs can affect CPU availability for other LPARs. TM AE monitors batch workload performance to avoid the overloading that leads to poor overall performance and delays. This is particularly important when capping demand, so that the workload being capped still performs as well as possible given that it is being constrained.
15 TM AE caps the batch workloads chosen by the installation in three phases. First, as the R4HA increases and approaches the limits set by Defined Capacity or an LPAR Group Limit, TM AE automatically starts restricting the CPU consumption of these workloads. It does this in 5 gradual steps. The idea is to slowly shrink usage and avoid hitting the limit with very high CPU consumption, which we usually refer to as hitting the wall. When you become soft capped at high rates of CPU utilization, the sudden drop in CPU availability can significantly affect performance, which usually causes installations to immediately increase their limits and their software bills. By taking action in stages before the limit is reached, TM AE makes capping manageable and eliminates the negative performance effects. Then, while soft capping is occurring, TM AE continues to constrain the lower priority batch workloads to the maximum extent specified by the installation, reducing the overall CPU demand in each affected LPAR. This leaves more cycles available for high priority workloads such as online, even if the high and low priority work are on different LPARs. Once the peak passes, the LPAR is no longer being capped and overall CPU consumption begins to come down. TM AE then smoothly removes the constraints on the affected batch workloads, gradually allowing more access to CPU and reversing the 5 steps just described. The goal is to automatically get as much of the deferred workload running as quickly as possible, as long as the rolling 4 hour average is not increased, a difficult task without automation. Gradually TM AE starts to run the deferred jobs. As long as the R4HA does not increase, the constraints are relaxed further and more of the deferred batch workload is allowed to run. TM AE also makes sure that the deferred work is selected at a rate that does not overload the LPAR.
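The stepped restriction can be pictured as mapping the R4HA's proximity to the limit onto a throttle level. The threshold scheme below is invented for illustration (stepping begins at 90% of the limit); TM AE's actual step points are not described here:

```python
def batch_throttle_step(r4ha, limit, steps=5, start_at=0.90):
    """Map R4HA proximity to the cap onto a throttle step 0..steps.
    Step 0 = unrestricted; step `steps` = maximum deferral.
    Thresholds are illustrative: stepping begins at start_at * limit."""
    if r4ha < start_at * limit:
        return 0
    if r4ha >= limit:
        return steps
    # Divide the band between start_at*limit and the limit into `steps` slices.
    band = (limit - start_at * limit) / steps
    return 1 + int((r4ha - start_at * limit) / band)

# As the R4HA climbs toward a 1000 MSU limit, restriction tightens gradually
# instead of arriving all at once when the wall is hit.
print([batch_throttle_step(r, 1000) for r in (850, 900, 940, 980, 999, 1000)])
# [0, 1, 3, 5, 5, 5]
```

The point of the staircase is exactly what the slide describes: by the time the limit is actually reached, batch demand has already been squeezed down, so there is no sudden cliff.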
TM AE makes it possible not only to live with the caps you may currently have implemented, but to reduce them further while still protecting the performance of your high priority applications. Lower cap values translate directly into lower software bills every month going forward. Here are a couple of examples to show how this all works.
16 Here's an example of an LPAR Group with two LPARs, LPAR1 and LPAR2, with a mix of online and batch load. On the left, before using TM AE, we see that on LPAR1, online and batch are both contributing 600 MSUs/hr to the peak R4HA. On LPAR2, online contributes 700 MSUs/hr and batch 300. The total peak R4HA is 2200 MSUs/hr. After implementing TM AE, the installation decides that 25% of their batch should be eligible to be deferred. The results? The online usage remains the same, while the batch is smaller, for a new total peak R4HA of 1975 MSUs/hr, a saving of 225 MSUs/hr. The LPAR Group limit can be reduced by 225 MSUs/hr without affecting online performance.
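The arithmetic of this example can be checked in a few lines, with the figures taken directly from the slide:

```python
# Per-LPAR contributions to the peak R4HA, with 25% of batch deferrable.
online = {"LPAR1": 600, "LPAR2": 700}   # MSUs/hr
batch  = {"LPAR1": 600, "LPAR2": 300}
deferrable = 0.25

before = sum(online.values()) + sum(batch.values())
saved  = deferrable * sum(batch.values())
after  = before - saved
print(before, after, saved)   # 2200 1975.0 225.0
```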
17 In this second example, we have a different configuration: an LPAR Group with two LPARs, 3 and 4, one of which (LPAR 3) has mostly online and a small amount of high priority batch that must run on that LPAR, and another (LPAR 4) that runs entirely batch load. On the left, before TM AE, the peak R4HA for this LPAR Group was 1850 MSUs/hr. This installation wants to reduce their peak monthly R4HA but they also have reached a point where their online applications are going to need more CPU to handle an expected increase in transaction volume due to the introduction of a new application. These are usually conflicting goals. The installation identifies 300 MSUs of batch workload on LPAR 4 that can be deferred during peak times. They decide to reduce their LPAR Group limit by 200 MSUs/hr and let the online work on LPAR 3 consume an additional 100 MSUs/hr. Note that the batch on LPAR 3 is all high priority so they left it alone. Their online applications got the capacity boost they needed, and they still reduced their peak monthly rolling 4 hour average.
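The numbers in this second example also balance out: deferring 300 MSUs of batch while granting online an extra 100 leaves exactly the 200 MSU/hr headroom needed to lower the group limit. A quick check, using the slide's figures:

```python
peak_before    = 1850   # MSUs/hr, peak R4HA for the LPAR Group
batch_deferred = 300    # deferrable batch identified on LPAR 4
online_growth  = 100    # extra capacity granted to online on LPAR 3

peak_after = peak_before - batch_deferred + online_growth
print(peak_before, peak_after)   # 1850 1650
```

The new peak of 1650 MSUs/hr matches reducing the 1850 MSU/hr limit by 200, so both goals, lower bills and more online capacity, are met at once.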
18 Here's a graph created from actual data from a large TM AE installation. On it you can see the R4HA, the CPU utilization, and the horizontal line marking the limit. In this case the installation is using LPAR Group Limits. Notice how the R4HA line gradually flattens until it just barely hits the LPAR Group Limit. This is TM AE in action, reducing the demand and the growth in the R4HA so that the effects of capping are minimized. This customer was able to reduce their limits and realize 7 figure annual savings. Let's look at typical savings.
19 This chart shows the expected savings for three different installations. For each of these, the contribution of all batch workloads to the overall peak R4HA was determined, and then, assuming that 25% of that is lower priority and deferrable, the potential savings were calculated. For this we used a rate of $300/MSU/hr, which is typical for a large installation with the usual IBM software set. Smaller installations may not save quite as many MSUs, but in some cases their incremental rate is $1400 per MSU/hr, so the savings can still be substantial. Some installations' current annual savings exceed the $1.3M shown on the chart.
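As a rough model of how such figures are derived, assuming the monthly bill scales linearly with the peak R4HA at the quoted rate; the batch contribution used below is invented purely to land near the chart's $1.3M figure:

```python
def annual_savings(batch_peak_msus, deferrable_fraction, rate_per_msu_month):
    """Estimate yearly software savings from a lower peak R4HA, assuming
    the monthly bill scales linearly with the peak at the given rate."""
    msus_saved = batch_peak_msus * deferrable_fraction
    return msus_saved * rate_per_msu_month * 12

# Hypothetical large installation: 1450 MSUs of batch in the peak R4HA,
# 25% of it deferrable, billed at $300 per MSU per month.
print(f"${annual_savings(1450, 0.25, 300):,.0f} per year")   # $1,305,000 per year
```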
20 It's clear that Capping Demand is the key to lowering software costs while protecting key applications. To enable capping the demand of our applications, we first need to take control of our applications with automation. Changes in system utilization and workload demand happen at machine speeds, so we need to anticipate and react at machine speeds as well. This means dynamically tracking and managing utilization and automatically balancing workloads across the available resources.
21 Here we have an over utilized highway. The design doesn t prevent more cars from coming on the road even as the average speed approaches zero. The result is many cars on the road, but no one getting to their destination within the expected time.
22 The same principles hold true in a computer system. If additional workloads are continually added to the system even as utilization reaches critical levels, everything waits longer. Some users may have a comfortable feeling since they see their job has started, but like the cars on the highway, the job is not going anywhere very fast. You can see from this queuing graph that at very high levels of utilization the total elapsed time becomes 10 or 20 times longer. The CPU is working just as fast, but the wait time to access the CPU increases exponentially. As utilization approaches 100%, the wait time increases to intolerable levels, just like gridlock on the highway. The key to peak throughput is to constantly manage the workload distribution against the rapidly changing utilization to maintain just the right amount of load. Now we'll explain how TM AE accomplishes this automatically.
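The 10x and 20x figures quoted here are exactly what the simplest single-server queueing model (M/M/1) predicts, where total elapsed time stretches by a factor of 1/(1 - utilization). A quick sketch:

```python
def elapsed_stretch(utilization):
    """M/M/1 queueing sketch: total elapsed time as a multiple of the
    service time at a given CPU utilization (0 <= utilization < 1)."""
    return 1.0 / (1.0 - utilization)

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{u:.0%} busy -> work takes {elapsed_stretch(u):.0f}x as long")
```

At 90% utilization work takes 10 times as long as its raw service time, at 95% it takes 20 times as long, and the curve diverges as utilization approaches 100%, which is the gridlock on the slide.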
23 To avoid overloading, TM AE only adds batch load when and where it makes sense. LPARs must have available capacity: either the LPARs are not using up their CPU entitlement by weight, or there are still unused CPU cycles on the CEC that the LPAR is able to take advantage of. The Service Class in which a job will run must be performing well, to avoid unproductive overloading, because everything works better when you avoid overloading. It is not important when a batch job starts; it's when it ends that matters. This is even more true with production applications, where one or more jobs may be dependent on the completion of another. Jobs end sooner in a TM AE managed environment. We did a benchmark that shows this in action.
24 We ran over 1000 jobs, submitted over several hours. The jobs had a mix of CPU and I/O load and were run both in WLM batch initiators and under TM AE. There were no job dependencies. Of course, the two environments were identical in every way: hardware, software, and service class definitions. No other work was running on the CEC. The result? TM AE ran far fewer jobs at once, started many fewer initiators, and completed much more work. Here's the graph of the benchmark results:
25 You can see that the work ran for hours. WLM started 300 initiators while TM AE automatically started only 25. As you can see from the graph, after over 8 hours TM AE had completed 200 more jobs. The only reason it didn't get higher is that we stopped submitting work! These results would be even more dramatic with workloads that make use of job dependencies. Many installations use manually controlled JES2 initiators. In that environment you will often find that the machine is either overloaded or underutilized, since it's just not possible to be as effective manually as automation that is constantly monitoring the environment and making instant decisions.
26 Sometimes less is more. By avoiding overloading, TM AE gets more batch jobs done, faster. Systems are much more prone to overloading when capped, which is why automation is so important when managing demand in a soft capped environment.
27 It's safe to say this load is imbalanced. As mentioned previously, the other key factor in effective Automated Workload Management is balance.
28 As we demonstrated with utilization, a system running at 100% is spending more time waiting than doing productive work. In the example given here, even though the system at 60% will provide slightly better service than the systems at 80%, the exponential delays experienced by the system at 100% will more than wipe out those gains. The facts are simple: in order to achieve peak performance, workloads must be balanced across the available resources. As with utilization, workload demands change too rapidly to manage manually. This is another automated function of TM AE.
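The same queueing sketch makes this point numerically. The utilization figures are the ones from the slide, with 100% clamped just below saturation so the delay stays finite:

```python
def stretch(u):
    """Elapsed time as a multiple of service time (M/M/1 sketch)."""
    return 1.0 / (1.0 - min(u, 0.999))   # clamp: at 100% the delay diverges

imbalanced = [0.60, 0.80, 1.00]
balanced   = [0.80, 0.80, 0.80]          # same total load, spread evenly
for name, loads in (("imbalanced", imbalanced), ("balanced", balanced)):
    avg = sum(stretch(u) for u in loads) / len(loads)
    print(f"{name}: average elapsed-time multiplier ~{avg:.1f}x")
```

The slightly better service on the 60% system is swamped by the enormous delays on the saturated one, while the balanced configuration holds the average multiplier to about 5x.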
29 In order to balance batch workloads automatically, TM AE makes sure it is aware of the performance and capacity of the entire environment. It tracks the utilization of all LPARs and the required system and resource affinities of all batch workloads. The installation-specified business priorities are defined to TM AE so that it always selects the most urgent job. Workload is balanced automatically because TM AE controls the number of initiators on each system and dynamically spreads the workload across all members of the JESplex.
30 TM AE does workload balancing right. Balancing is not simply making sure the same number of jobs are running on each system. Balancing means that each system in the JESplex runs the right amount of batch workload for the current conditions on each system while still respecting any specific system affinity and resource requirements of individual batch jobs. TM AE considers the actual activity on each LPAR and reevaluates CEC, LPAR and Service Class performance every 10 seconds so it can respond to environments that can and do change very rapidly. Only automation can do this. TM AE avoids overloading by rebalancing the batch workload as CPU demand and availability change. Capacity can change too, due to Capacity on Demand, LPAR weight changes, and, of course, our old friend, soft capping, which can cause the available capacity to be suddenly reduced. By balancing batch workload intelligently, TM AE delivers increased throughput with proper use of existing resources.
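As an illustration only (the names and fields below are invented, not TM AE's actual interface), a balancing decision of this kind boils down to something like: among the systems that satisfy a job's affinity and still have headroom below their utilization target, place work on the one with the most spare capacity, and defer the job if no safe placement exists.

```python
def pick_lpar(job, lpars):
    """Toy placement rule: pick the affine LPAR with the most headroom,
    or return None to defer the job when no LPAR has safe capacity."""
    candidates = [
        l for l in lpars
        if l["name"] in job["affinity"] and l["busy"] < l["target_busy"]
    ]
    if not candidates:
        return None                      # defer: no safe placement right now
    return max(candidates, key=lambda l: l["target_busy"] - l["busy"])

lpars = [
    {"name": "SYSA", "busy": 0.85, "target_busy": 0.90},
    {"name": "SYSB", "busy": 0.60, "target_busy": 0.90},
]
job = {"affinity": {"SYSA", "SYSB"}}
print(pick_lpar(job, lpars)["name"])     # SYSB has the most headroom
```

In practice such a rule would be re-evaluated continuously, as the slide notes TM AE does every 10 seconds, because the headroom figures change constantly.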
31 Automation is no longer an option in datacenters today. Capacity and Workload Managers need to harness this technology to control costs while maximizing performance. Automated Workload Management provides the necessary controls to maximize throughput by managing utilization and balance. Automated Capacity Management enables the datacenter to Cap Demand, the only way to safely reduce cap levels and lower software costs.
White paper: Unlocking the potential of load testing to maximise ROI and reduce risk. Executive Summary Load testing can be used in a range of business scenarios to deliver numerous benefits. At its core,
More informationWHITE PAPER Using SAP Solution Manager to Improve IT Staff Efficiency While Reducing IT Costs and Improving Availability
WHITE PAPER Using SAP Solution Manager to Improve IT Staff Efficiency While Reducing IT Costs and Improving Availability Sponsored by: SAP Elaina Stergiades November 2009 Eric Hatcher EXECUTIVE SUMMARY
More informationHow To Create A Virtual Data Center
THREE MUST HAVES FOR THE VIRTUAL DATA CENTER POSITION PAPER JANUARY 2009 EXECUTIVE SUMMARY Traditionally, data centers have attempted to respond to growth by adding servers and storage systems dedicated
More informationIBM DB2 Recovery Expert June 11, 2015
Baltimore/Washington DB2 Users Group IBM DB2 Recovery Expert June 11, 2015 2014 IBM Corporation Topics Backup and Recovery Challenges FlashCopy Review DB2 Recovery Expert Overview Examples of Feature and
More informationBest Practices for Monitoring Databases on VMware. Dean Richards Senior DBA, Confio Software
Best Practices for Monitoring Databases on VMware Dean Richards Senior DBA, Confio Software 1 Who Am I? 20+ Years in Oracle & SQL Server DBA and Developer Worked for Oracle Consulting Specialize in Performance
More informationEnterprise Job Scheduling: How Your Organization Can Benefit from Automation
WHITE PAPER Enterprise Job Scheduling: How Your Organization Can Benefit from Automation By Pat Cameron Introduction Today's companies need automation solutions to attain the high levels of availability,
More informationIBM z13 Software Pricing Announcements
IBM z13 Software Pricing Announcements - IBM Collocated Application Pricing (ICAP) - Country Multiplex Pricing - Technology Update Pricing for z13 January 14, 2015 IBM z13 Software Pricing Announcements
More informationVirtual Desktop Infrastructure Optimization with SysTrack Monitoring Tools and Login VSI Testing Tools
A Software White Paper December 2013 Virtual Desktop Infrastructure Optimization with SysTrack Monitoring Tools and Login VSI Testing Tools A Joint White Paper from Login VSI and Software 2 Virtual Desktop
More informationThe Business Case for Virtualization Management: A New Approach to Meeting IT Goals By Rich Corley Akorri
The BusinessCase forvirtualization Management: A New ApproachtoMeetingITGoals ByRichCorley Akorri July2009 The Business Case for Virtualization Management: A New Approach to Meeting IT Goals By Rich Corley
More informationThe business value of improved backup and recovery
IBM Software Thought Leadership White Paper January 2013 The business value of improved backup and recovery The IBM Butterfly Analysis Engine uses empirical data to support better business results 2 The
More informationCapacity planning for IBM Power Systems using LPAR2RRD. www.lpar2rrd.com www.stor2rrd.com
Capacity planning for IBM Power Systems using LPAR2RRD Agenda LPAR2RRD and STOR2RRD basic introduction Capacity Planning practical view CPU Capacity Planning LPAR2RRD Premium features Future STOR2RRD quick
More informationHOW IS WEB APPLICATION DEVELOPMENT AND DELIVERY CHANGING?
WHITE PAPER : WEB PERFORMANCE TESTING Why Load Test at all? The reason we load test is to ensure that people using your web site can successfully access the pages and complete whatever kind of transaction
More informationW W W. Z J O U R N A L. C O M o c t o b e r / n o v e m b e r 2 0 0 9 INSIDE
T h e R e s o u r c e f o r U s e r s o f I B M M a i n f r a m e S y s t e m s W W W. Z J O U R N A L. C O M o c t o b e r / n o v e m b e r 2 0 0 9 W h e r e, W h e n, A N D H o w t o D e p l o y INSIDE
More informationA Shift in the World of Business Intelligence
Search Powered Business Analytics, the smartest way to discover your data A Shift in the World of Business Intelligence Comparison of CXAIR to Traditional BI Technologies A CXAIR White Paper www.connexica.com
More informationYou re not alone if you re feeling pressure
How the Right Infrastructure Delivers Real SQL Database Virtualization Benefits The amount of digital data stored worldwide stood at 487 billion gigabytes as of May 2009, and data volumes are doubling
More informationChallenges of Capacity Management in Large Mixed Organizations
Challenges of Capacity Management in Large Mixed Organizations ASG-PERFMAN Insert Custom Session QR if Desired Glenn A. Schneck Sr. Enterprise Solutions Engineer Glenn.schneck@asg.com Topics What is ASG-PERFMAN
More informationCase Study In the last 80 years, Nationwide has grown from a small mutual auto
"The creation of a private cloud built around the z196 servers supports our business transformation goals by enabling the rapid, seamless deployment of new computing resources to meet emerging requirements."
More informationThe Importance of Software License Server Monitoring
The Importance of Software License Server Monitoring NetworkComputer How Shorter Running Jobs Can Help In Optimizing Your Resource Utilization White Paper Introduction Semiconductor companies typically
More informationAre Your Capacity Management Processes Fit For The Cloud Era?
Are Your Capacity Management Processes Fit For The Cloud Era? An Intelligent Roadmap for Capacity Planning BACKGROUND In any utility environment (electricity, gas, water, mobile telecoms ) ensuring there
More informationRapid Bottleneck Identification
Rapid Bottleneck Identification TM A Better Way to Load Test WHITEPAPER You re getting ready to launch or upgrade a critical Web application. Quality is crucial, but time is short. How can you make the
More informationVirtualCenter Database Performance for Microsoft SQL Server 2005 VirtualCenter 2.5
Performance Study VirtualCenter Database Performance for Microsoft SQL Server 2005 VirtualCenter 2.5 VMware VirtualCenter uses a database to store metadata on the state of a VMware Infrastructure environment.
More informationWith Cloud Computing, Who Needs Performance Testing?
With Cloud Computing, Who Needs Performance Testing? Albert Witteveen, Pluton IT Insert speaker picture here, no more than 150x150 pixels www.eurostarconferences.com @esconfs #esconfs Albert Witteveen
More informationCSF Designer. Complete Customer Communication
CSF Designer Complete Customer Communication Your relationship with your customers is only as strong as the last interaction you had with them. Maybe it was face to face the ideal opportunity to do business.
More informationProactive Performance Management for Enterprise Databases
Proactive Performance Management for Enterprise Databases Abstract DBAs today need to do more than react to performance issues; they must be proactive in their database management activities. Proactive
More informationStreamlining the communications product lifecycle. By Eitan Elkin, Amdocs
From idea to Realization Streamlining the communications product lifecycle By Eitan Elkin, Amdocs contents Sigh No rest for the weary 01 Documenting the challenge 03 Requirements for a solution 07 The
More informationThe Total Cost of Ownership (TCO) of migrating to SUSE Linux Enterprise Server for System z
The Total Cost of Ownership (TCO) of migrating to SUSE Linux Enterprise Server for System z This White Paper explores the financial benefits and cost savings of moving workloads from distributed to mainframe
More informationResource Allocation and Scheduling
90 This section is about the general overview of scheduling and allocating resources. Assigning people and machines to accomplish work (work on tasks). Resource Allocation: Time-Constrained. Assigning
More informationHOW TO. to Executives. You know that marketing automation is the greatest thing since sliced bread. After all, what else can help you...
HOW TO Sell Marketing to Executives Automation You know that marketing automation is the greatest thing since sliced bread. After all, what else can help you... 1 making inroads with the corner office
More informationYOUR ERP PROJECT S MISSING LINK: 7 REASONS YOU NEED BUSINESS INTELLIGENCE NOW
YOUR ERP PROJECT S MISSING LINK: 7 REASONS YOU NEED BUSINESS INTELLIGENCE NOW THERE S NO GOOD REASON TO WAIT Enterprise Resource Planning (ERP) technology is incredibly useful to growing organizations,
More informationDB2 for z/os Backup and Recovery: Basics, Best Practices, and What's New
Robert Catterall, IBM rfcatter@us.ibm.com DB2 for z/os Backup and Recovery: Basics, Best Practices, and What's New Baltimore / Washington DB2 Users Group June 11, 2015 Information Management 2015 IBM Corporation
More informationResponse Time Analysis
Response Time Analysis A Pragmatic Approach for Tuning and Optimizing SQL Server Performance By Dean Richards Confio Software 4772 Walnut Street, Suite 100 Boulder, CO 80301 866.CONFIO.1 www.confio.com
More informationCustom Systems Corp.
ABOUT Company Overview No company builds a 40-year reputation for excellence overnight. We began life in 1973, providing payroll and accounts payable services. Since then CSC has grown and expanded, anticipating
More informationThe Association of System Performance Professionals
The Association of System Performance Professionals The Computer Measurement Group, commonly called CMG, is a not for profit, worldwide organization of data processing professionals committed to the measurement
More informationMake the right decisions with Distribution Intelligence
Make the right decisions with Distribution Intelligence Bengt Jensfelt, Business Product Manager, Distribution Intelligence, April 2010 Introduction It is not so very long ago that most companies made
More informationOperations Management for Virtual and Cloud Infrastructures: A Best Practices Guide
Operations Management for Virtual and Cloud Infrastructures: A Best Practices Guide Introduction Performance Management: Holistic Visibility and Awareness Over the last ten years, virtualization has become
More informationCRM SOFTWARE EVALUATION TEMPLATE
10X more productive series CRM SOFTWARE EVALUATION TEMPLATE Find your CRM match with this easy-to-use template. PRESENTED BY How To Use This Template Investing in the right CRM solution will help increase
More informationCloud Computing Payback. An explanation of where the ROI comes from
Cloud Computing Payback An explanation of where the ROI comes from November, 2009 Richard Mayo Senior Market Manager Cloud Computing mayor@us.ibm.com Charles Perng IBM T.J. Watson Research Center perng@us.ibm.com
More informationWhite Paper. Using Linux on z/vm to Meet the Challenges of the 21st Century
89 Fifth Avenue, 7th Floor New York, NY 10003 www.theedison.com 212.367.7400 White Paper Using Linux on z/vm to Meet the Challenges of the 21st Century Printed in the United States of America Copyright
More informationBridgeWays Management Pack for VMware ESX
Bridgeways White Paper: Management Pack for VMware ESX BridgeWays Management Pack for VMware ESX Ensuring smooth virtual operations while maximizing your ROI. Published: July 2009 For the latest information,
More informationThe Flash- Transformed Server Platform Maximizing Your Migration from Windows Server 2003 with a SanDisk Flash- enabled Server Platform
WHITE PAPER The Flash- Transformed Server Platform Maximizing Your Migration from Windows Server 2003 with a SanDisk Flash- enabled Server Platform.www.SanDisk.com Table of Contents Windows Server 2003
More informationUnderstanding the Performance of an X550 11-User Environment
Understanding the Performance of an X550 11-User Environment Overview NComputing's desktop virtualization technology enables significantly lower computing costs by letting multiple users share a single
More informationResource Monitoring During Performance Testing. Experience Report by Johann du Plessis. Introduction. Planning for Monitoring
Resource Monitoring During Performance Testing Experience Report by Johann du Plessis Introduction During a recent review of performance testing projects I completed over the past 8 years, one of the goals
More informationFive Key Principles of Conversion-Focused Website Design
Five Key Principles of Conversion-Focused Website Design 2015 G5, MF0313 Introduction Did you know that 97 percent of consumers use search engines to research products and resources? 1 Now more than ever,
More informationDeveloping a Load Testing Strategy
Developing a Load Testing Strategy Michele Ruel St.George Bank CMGA 2005 Page 1 Overview... 3 What is load testing?... 4 Scalability Test... 4 Sustainability/Soak Test... 4 Comparison Test... 4 Worst Case...
More information