# THE COE PERFORMANCE METHOD


FROM THE CENTER OF EXPERTISE

THE COE PERFORMANCE METHOD
A Performance Methodology for Enterprise-Wide Information Systems

Roger Snowden, Center of Expertise, Oracle Corporation
© 2002 Oracle Corporation, all rights reserved.

Although this description suggests a level of complexity that might discourage the non-mathematician, no mathematics background is needed to develop a reasoned understanding of the principles involved. The fundamental equation we need to understand is this:

    Response Time = Service Time + Wait Time

Response time is the total time a process consumes, start to finish. In a rush-hour traffic example, response time would be measured from the moment a car enters a freeway to the moment it leaves an off-ramp. In a retail service scenario, it might run from the time a customer joins a bank teller's line (to cash a check, perhaps) to the moment cash is in hand. Service time is the time consumed by the process itself: the teller's busy time. Wait time is the time spent in line waiting for service. Optimal processes have minimal service and wait times.

The target of the performance method discussed here is overall response time. For the most part, the focus will be on the causes of wait time, but by no means will service time be ignored.

Most of us already understand these concepts; we only need to observe the events of our daily lives to reinforce that understanding. Consider the commuter driving to work during rush hour on a typical morning. If traffic is moving rapidly but congestion is heavy and cars are close together, a simple near miss caused by one car stopping suddenly can create instant havoc. As following cars are forced to brake suddenly, cars further back are affected and forced to slam on their brakes in turn. The effect ripples backward through the highway, perhaps for miles. Even if the original incident involves no actual damage and traffic at the initial site begins moving again immediately, the delaying after-effects are likely to continue for perhaps an additional hour. Once congestion has set in, it seems to feed on itself long after the cause of the bottleneck is removed.
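The decomposition of response time can be made concrete with a small simulation: jobs line up for a single first-come, first-served server, and each job's response time splits exactly into its wait in line plus its own service time. The arrival and service figures below are invented for illustration.

```python
# Minimal FIFO single-server simulation illustrating
# Response Time = Service Time + Wait Time.
# Arrival and service times are made-up illustrative numbers.

def simulate(arrivals, services):
    """Return (wait, service, response) per job for one FIFO server."""
    results = []
    server_free_at = 0.0
    for arrive, service in zip(arrivals, services):
        start = max(arrive, server_free_at)   # wait only if the server is busy
        wait = start - arrive
        server_free_at = start + service
        response = server_free_at - arrive
        results.append((wait, service, response))
    return results

jobs = simulate(arrivals=[0.0, 1.0, 2.0, 2.5],
                services=[2.0, 2.0, 1.0, 1.0])
for wait, service, response in jobs:
    # the identity holds for every job
    assert abs(response - (wait + service)) < 1e-9
```

Note that the later jobs spend more time waiting than being served, exactly the congestion effect described above: their service times never changed, only the queue ahead of them did.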
It may be impractical to solve all of the mathematical equations describing these events and their collective consequences, but the rush-hour driving experience certainly reinforces the conclusion that a relatively small event can have severe performance consequences. As with traffic jams, computer systems suffer similar congestion.

Service time deserves some consideration. In a database application, a session's process might be found to spend too much service time, in the form of CPU time, processing extra data blocks because a particular table lacks a proper index. That is, the server performs a full table scan instead of an index range scan, retrieving many more data blocks than would otherwise be necessary. While this additional work might initially be regarded as service time (each block retrieval operation does consist of some CPU processing time), the operation will involve even more I/O wait time, as the user's process must wait on the disk read request for each additional block. So, while the full table scan certainly incurs additional CPU service time, the symptom of poor performance will most obviously be exhibited as excessive wait time (disk I/O) rather than service (CPU) time.

Consider another example from daily life: the junk food lunch. We drop by our favorite hamburger restaurant for a quick bite and are faced with three lines of people waiting to order food from three employees acting as servers. Which line do we choose? Almost automatically, we choose the shortest line available. After several minutes, we notice someone who arrived after us being served before us. It dawns on us that the person serving our line might still be in training: it takes that person about twice as long to fill an order as the more experienced workers. So, we intuitively understand that service time, the time it takes to actually take and fill an order, is a vital component of response time.
Response time in this case is the time it takes to get our food in hand, starting from the moment we step into line in the restaurant.

Another example of wait time as a primary measure of poor performance is CPU time consumed by excess SQL parsing operations. A well-designed application will not only make use of sharable SQL to avoid hard parses, but will also avoid soft parses by keeping frequently used cursors open for immediate execution without reparsing at all, neither hard nor soft. A poorly designed application will certainly exhibit a high percentage of parse-time CPU, but will probably also incur a disproportionate amount of time waiting for latches, most notably the library cache latch. As such, even a highly CPU-consumptive process is likely to cause measurable, disproportionate waits. So, while service time must be monitored, performance problems are more likely to be quickly spotted by focusing on wait time. CPM as presented here takes a holistic approach to performance analysis and encourages the analyst to concentrate on service time or wait time as appropriate for the situation at hand. If the real problem is service-time related rather than wait-time related, CPM will indicate it and its cause can be corrected.

Although the earlier automobile traffic example is easy to understand, the importance of wait time is all too easy to forget when dealing with the abstractions of computer software. Yet that example can highlight how a database server might have a buffer cache hit ratio of ninety-nine percent and at the same time exhibit abysmal response time, or how a large parallel query might take too long to complete while CPU consumption mysteriously drops to near-idle levels. When the CPU is not working, it is waiting.

## VARIANCE, UTILIZATION AND CAPACITY

Queuing analysis is helpful in understanding resource utilization and optimizing service levels. In queuing analysis, the exact timing of an event is not always known. Customer arrivals, or computer users clicking the submit button to invoke a database request, tend not to be uniformly timed, and often come in groups. This is a common statistical phenomenon known as variance. It is simpler and more effective to deal with the aggregation of events and construct a mathematical model based on the probability of each event. Since customer arrival times and hamburger preparation times vary, a model can take the form of a graph illustrating the effects of congestion, or busy-ness. From that model, an analysis can be made of response time, throughput, or the nature of a bottleneck. The manager of the hamburger restaurant knows from experience that people arrive at random intervals.
That is, while there might be an average of three customers per minute during the mid-morning hours, people don't actually arrive at exact twenty-second intervals. They come in groups and as individuals at unpredictable times. Thus, variance in arrival rates affects response time.

An idle resource, like an employee or a CPU, is often seen as wasted capacity. However, an occasionally idle resource may be the price one pays to provide the level of service needed to be competitive. Similarly, the freeway we use to drive to work during rush hour may have several lanes idle at two o'clock in the morning; during rush hour, all lanes may be full and backed up. Extra slack capacity is traded off for busy-time response and throughput. In computing systems, congestion can be experienced as either idle CPU time or growing process run queues; unused memory or swapping virtual memory; idle or busy disk. We may not be able to determine precisely how many users will be logged on at one time or exactly what the workload will be, so we may have to provide some margin of extra capacity in order to get our business completed on time.

In a large enterprise, the queuing model presents itself in the measure of end-to-end application response. A user pressing a mouse button in an office may be physically and logically miles from the data of business interest. The total time a user waits before the screen responds with a display of data is the sum of the time spent in each system component between that mouse and the distant repository of data, as well as the return trip. Each component of technology has its own process model and is a potential contributor to response delay. We will refer to these interconnected technology components as technology stacks. Examples include the network, database server, application server, the underlying hardware machines, and their operating systems.
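The trade-off between slack capacity and responsiveness can be sketched with the textbook M/M/1 queue, in which average response time is R = S / (1 − U) for mean service time S and utilization U. The service time and utilization figures below are illustrative only, not measurements from any real system.

```python
# Average response time in an M/M/1 queue: R = S / (1 - U).
# S = mean service time; U = utilization (fraction of time busy, 0 <= U < 1).
# As U approaches 1, response time grows without bound.

def mm1_response_time(service_time, utilization):
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

S = 0.1  # say, 100 ms of service per request (illustrative)
for U in (0.50, 0.90, 0.99):
    print(f"utilization {U:.2f} -> mean response {mm1_response_time(S, U):.2f} s")
```

At 50% utilization a request takes twice its service time; at 99% it takes a hundred times. This is the mathematical face of the idle-lanes observation: a little slack capacity is what keeps busy-time response tolerable.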
With a basic understanding of queuing theory, we need a way to apply it to the technology problem at hand. We need access to information that tells us when system components are busy, how busy they are, and what they are waiting for when they are not busy. Fortunately, there are numerous sources for this information; all we need is to identify them and find a cohesive way to bring the information together in an understandable manner. Although each of these stacks consists of sub-processes, each with its own queuing model, we can view the overall stack as an aggregate process and consider its response as a unit.

For the Oracle Database Server, there exists a set of statistical values available for report, called wait events, indicating the presence or absence of internal bottlenecks. Measuring changes in the performance of an Oracle database involves viewing these wait events by value of time waited and comparing those wait times to the same measure from a different time period. Other stacks involved in the end-to-end application view typically have tools providing similar information. We will discuss some of those tools in more detail later. Let's now forge on to the practical details of diagnosing performance issues.

## THE ENGINEERING APPROACH

Certainly, the need for engineering discipline in the deployment and management of mission-critical applications is well understood. Such discipline may currently be less widely applied to performance management than to other areas of enterprise technology, but an engineering approach to the performance of an application is just as important as engineering the initial deployment. While practices vary from enterprise to enterprise, certain key practices have been identified by Oracle's Center of Expertise as essential to effective performance management. First among these is the establishment of a Service Level Agreement (SLA). It is beyond the scope of this paper to fully define the nature of such an agreement. Nevertheless, it is clear that in order to declare a particular aspect of system performance "bad," one must first have a clear definition of "good." One goal of the COE Performance Method described here is to achieve the performance commitments of the SLA and to diagnose variances from that SLA.

## SERVICE LEVEL AGREEMENT

Since an SLA is an agreement between a technology service provider and a user, it tends to be a bottom-line document. That is, the agreement specifies a particular level of availability and performance for a technology-based service.
As such, an SLA tends to focus on end-to-end service and does not bother with the interconnected details in the middle. It is up to the technology provider to understand, define and support the interconnected components (stacks). Technology stacks in a contemporary information environment include database servers, application servers, the hardware and operating system platforms on which those servers run, network components such as routers, hubs, gateways and firewalls, and workstations with user interface software for end users. Each stack has its own set of support issues and available management tools.

In order to respond effectively to reactive performance issues, the service provider should take a proactive approach. The tools and techniques needed to diagnose wait time versus service time for each technology stack must be implemented and in place, and they should be well understood by the service provider before any actual performance diagnostic engagement. This deployment includes not only the tools, but also the engineering training and support to use them.

The Oracle Database Server ships with a tool called Statspack, specifically designed to monitor server performance; it offers a high-level view of server wait events, the key to tracking down database performance bottlenecks. Operating system tools such as sar, netstat, glance, vmstat and iostat, among others, are available on most UNIX platforms and are quite effective in combination with Statspack for overall proactive diagnostic monitoring. Windows NT and its successors, Windows 2000 and Windows XP, also come packaged with performance monitoring tools. Third-party tools are available as well, and many are quite effective, although they generally carry a price tag. Statspack is available free of charge, as is usually the case with the operating system tools mentioned above.
## PERFORMANCE BASELINE REFERENCE

Whatever our toolset choices, we need to use those tools to establish and maintain a performance metric baseline. This takes the form of actual performance data, gathered at appropriate times using tools such as those already mentioned, to establish some measurable norm. A baseline might consist of an elaborate set of gathered data, or may be as simple as a benchmark timing of a standard query. The important characteristic of the baseline is that it is consistent and offers a reasonable basis of comparison. Data gathered should represent actual system performance during one or more periods of busy activity; a baseline of data gathered while the system is idle is of little use.

The baseline will need to be maintained as the system evolves with respect to workload, functionality and configuration. If you add new application features, upgrade the database version, or add or replace CPUs or other hardware, the environment has changed and therefore performance may have changed. If the baseline is not reestablished, any understanding of a future performance complaint from the user community will be compromised and blurred: one will not be able to tell whether a performance change is due to a configuration issue or a bug introduced with a new application feature. The baseline is established for this system in this environment, and it enables the comparative analysis needed to diagnose a specific problem.

The performance complaint itself is worthy of some note. One of the problems inherent in managing complex systems is the uncertainty of the performance metric. Performance is largely a matter of perception. A user may decide one day that a two-second response for a particular form is acceptable, but find it unacceptable the next day, depending on how hurried or relaxed the user feels. This suggests the information used for the reference baseline needs to be coordinated with the metrics used for the SLA. Even though performance complaints may still be lodged, at least the system or database administrator has either a defense to offer or a starting point for diagnosis.

## ENGINEERING A SIMPLE METHOD

One of the best features of the COE Performance Methodology is that it lends itself to performance analysis of large systems of interconnected technology stacks. Since our premise is that a system is no faster than its worst bottleneck, it is obviously important to be able to identify the location of that bottleneck.
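As a minimal sketch of the "benchmark timing of a standard query" style of baseline described above, the snippet below times a repeatable workload and compares later samples to a stored figure. The workload stand-in and the 20% tolerance are illustrative assumptions, not prescribed values.

```python
# Sketch of a baseline check: time a repeatable workload, keep the figure,
# and compare later measurements against it. The lambda workload and the
# 20% tolerance are stand-ins chosen for illustration only.
import time

def time_workload(workload, runs=3):
    """Best-of-N wall-clock timing of a repeatable workload, in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return min(timings)

def against_baseline(sample, baseline, tolerance=0.20):
    """Return (ok, ratio); ok means the sample is within tolerance of baseline."""
    ratio = sample / baseline
    return ratio <= 1.0 + tolerance, ratio

# a stand-in for a "standard query" against a quiet-but-realistic system
baseline = time_workload(lambda: sum(range(200_000)))
ok, ratio = against_baseline(time_workload(lambda: sum(range(200_000))), baseline)
```

The essential property argued for in the text is consistency: the same workload, timed the same way, under comparably busy conditions, so that the ratio means something.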
Moreover, although Oracle tends to be the common denominator from the perspective of users and management alike, we know from experience that bottlenecks can just as easily reside in the network, the application server, or an operating system. In order to identify the problem technology stack, and ultimately the actual problem itself, we need a systematic approach. The essential steps of the CPM approach, illustrated in Figure 1, are:

1. Problem Statement
2. Information Gathering / Stack Identification
3. Stack Drill-Down
4. Fix the Stack
5. Test Against Baseline
6. Repeat Until Complete

Figure 1: The COE Performance Methodology, in a nutshell.

As illustrated, the basic steps of the COE Performance Methodology are straightforward. By starting with a high-level, broad view of the enterprise system and rigorously following the steps in an orderly manner, positive results are achieved simply, quickly and without expensive and time-consuming guesswork.

…a scripting language such as perl to analyze the text output. The tool can then "phone home" when exceptions are encountered or predefined thresholds are exceeded. Having an integrated monitoring environment will facilitate rapid and accurate stack identification during a performance crisis. While elaborate third-party tools are available for such an infrastructure, off-the-shelf and freeware tools are often entirely adequate, although any tools you choose will have to be integrated into your environment.

For example, each UNIX platform in the enterprise might have a scheduled process to gather sar and netstat statistics at regular intervals. If Statspack snapshots are collected at similar times, it is a simple matter to analyze those tools' reports for a period of concern and compare the data to reports from, say, exactly one week or one month earlier. If the application workload is similar for both periods but the performance problem did not exist in the earlier period, we have a fast way to compare bad performance data to baseline data. If the problem is with the underlying UNIX platform or the network, it should be apparent immediately. Even without the baseline, a trained technician will recognize symptoms of constraint: a high percentage of CPU wait time or process swapping activity, for example. See Figure 2 for an example of vmstat output.

If no obvious starting point presents itself, we recommend you start with the database server. One reason is that the database administrator understands that stack best. Another is that the Oracle server gathers and provides information offering clues to problems across other stacks; network problems, for example, often show up as a specific Oracle wait event, "SQL*Net more data to client". Knowing the response time through the database stack will allow you to determine whether most of the overall response time is spent in the database or not.
This in turn will direct your attention to the database itself or to another stack.
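The "script that analyzes the text output" idea can be sketched in a few lines (Python here, rather than perl). The column layout and the run-queue threshold of 4 are assumptions for illustration; a real monitor would alert, or "phone home", rather than just return the offending values.

```python
# Sketch of a scripted monitor: parse vmstat-style text output and flag
# samples that cross a predefined threshold. The sample text, column
# positions, and the threshold of 4 are invented for illustration.

SAMPLE = """\
 r b w   swap  free
 2 0 0 183344 21800
 6 1 0 183344 20532
 1 0 0 183344 21144
"""

def flag_run_queue(text, threshold=4):
    """Return the run-queue values (the 'r' column) exceeding the threshold."""
    lines = text.strip().splitlines()[1:]           # skip the header row
    run_queues = [int(line.split()[0]) for line in lines]
    return [r for r in run_queues if r > threshold]

alerts = flag_run_queue(SAMPLE)   # a real tool would page someone here
```

Scheduled via cron alongside the sar and netstat collection described above, a few scripts like this give the cheap, integrated early-warning layer the text argues for.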

```
$ vmstat 5 5
 procs     memory            page            disk          faults      cpu
 r b w   swap  free  re mf pi po fr de sr s0 s1 s2 s3   in  sy  cs  us sy id
 (numeric sample rows not reproduced in this transcription)
```

This is a vmstat sample taken from a 32-processor Sun system, for five intervals of five seconds each. The first line of vmstat output reports averages since boot, so we ignore it. A quick glance under the procs section tells us there is some process run-queue wait time (r is either 0 or 1 in this sample) and some resource waiting (b > 0 for most interval samples). This is generally considered good, non-bottlenecked performance, although the b value indicates a process blocked by an I/O wait, so disk may need balancing if that value grows. On Solaris, run queues are averaged across all CPUs.

Memory paging and swapping are not the same. Paging, even with seemingly large numbers, is quite normal. The sr column tells you how often the page scanner daemon is looking for memory pages to reclaim, in pages scanned per second. Consistently high numbers here (> 200) are a good indication of a real (not virtual) memory shortage.

The fields displayed are:

- procs: the number of processes in each of three states:
  - r: in run queue
  - b: blocked for resources (I/O, paging, and so forth)
  - w: runnable but swapped
- memory: usage of virtual and real memory:
  - swap: amount of swap space currently available (Kbytes)
  - free: size of the free list (Kbytes)
- page: page faults and paging activity, in units per second:
  - re: page reclaims
  - mf: minor faults
  - pi: kilobytes paged in
  - po: kilobytes paged out
  - fr: kilobytes freed
  - de: anticipated short-term memory shortfall (Kbytes)
  - sr: pages scanned by clock algorithm
- disk: the number of disk operations per second, per disk unit shown
- faults: trap/interrupt rates, per second:
  - in: (non-clock) device interrupts
  - sy: system calls
  - cs: CPU context switches
- cpu: percentage breakdown of CPU time (on MP systems, an average across all processors):
  - us: user time
  - sy: system time
  - id: idle time

Figure 2: Annotated vmstat output.

## TIMING IS EVERYTHING

An important consideration when evaluating third-party tools, or rolling your own, is to gather and analyze data in a meaningful manner. For the most part, we are dealing with statistical samples when we monitor hardware and software resources, so sampling techniques must be sensible with respect to sample size and

interval. The vmstat report shown in Figure 2 was taken at five-second intervals. While short intervals show performance spikes quite well, they also tend to exaggerate variances in values and therefore contain statistical noise. A better method is to take concurrent short and long samples, in order to analyze both averages and variances and get a meaningful picture of performance.

```
$ iostat -xtc
                 extended device statistics                 tty         cpu
 device  r/s  w/s  kr/s  kw/s  wait  actv  svc_t  %w  %b  tin tout  us sy wt id
 (sample rows for the sd disk devices not reproduced in this transcription)
```

This is an abbreviated iostat report from the same 32-processor system as in Figure 2. The svc_t column is actually the response time for the disk device, however misleading the name. When looking for input/output bottlenecks on disks, a rule of thumb is to look for response time greater than 30 milliseconds on any single device; a well-buffered and managed disk system can show response times under 10 milliseconds. The field names and their meanings:

- device: name of the disk
- r/s: reads per second
- w/s: writes per second
- kr/s: kilobytes read per second
- kw/s: kilobytes written per second
- wait: average number of transactions waiting for service (queue length)
- actv: average number of transactions actively being serviced
- svc_t: average service time, in milliseconds
- %w: percent of time there are transactions waiting for service (queued)
- %b: percent of time the disk is busy (transactions in progress)

Figure 3: Annotated iostat output.

A sudden burst of activity might cause a single disk drive to be busy enough to cause process queuing, yet be of no real concern unless it becomes chronic. On the other hand, long iostat samples average disk service time, tending to hide frequent spikes and possibly masking a real problem. See Figure 4 for an example of a CPU resource measurement illustrating how large variances in reported data can be misleading.
If you look at the data over too short an interval, you might conclude CPU idle time is nearly seventy percent, or nearly as low as twenty percent. If you are trying to analyze a performance anomaly during a period of high or low CPU usage, such a narrow slice of data can be quite helpful. On the other hand, taken as an indication of the norm, such a microscopic view could be completely misleading.

The first priority at this early juncture is to eliminate obvious problems that can skew performance data and blur the analysis. We are concerned with quickly ascertaining the overall health of the components of each technology stack, to make sure we know where the possible problem is, and where it isn't. We do this by looking for exceptions to what we know to be normal behavior.
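The short-versus-long interval trade-off can be shown numerically: over short windows the same data swing widely, while the whole-period average is stable. The %idle figures below are invented for illustration.

```python
# Invented %idle samples: short averaging windows swing widely,
# while the overall mean over the full period is stable.
from statistics import mean

samples = [68, 25, 61, 30, 55, 22, 64, 35, 58, 27, 66, 29]  # invented %idle values

window = 3  # three consecutive samples per short window
window_means = [mean(samples[i:i + window])
                for i in range(0, len(samples), window)]

overall = mean(samples)
spread = max(window_means) - min(window_means)
# the short-window means disagree with one another far more than any of
# them differs from the long-run average: that disagreement is the noise
```

Any single short window here would support a very different conclusion about "normal" idle time, which is exactly why both average and variance are needed before calling a sample an exception.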

[Chart not reproduced in this transcription: CPU idle time (%idle) plotted over the morning at fifteen-minute intervals, with a trend line and intervals marked "Low" and "High".]

Figure 4: CPU idle times extracted from a sar report. The jagged line represents samples taken at fifteen-minute intervals. The trend line illustrates the degree to which variances among individual samples can be distracting and misleading. You need both average and variance information to get a true picture of what is happening at the hardware and operating system levels. The interval marked Low is entirely different from the interval marked High. A narrow peek at a performance variation can be useful for analyzing bottlenecks, but can be misleading if taken as an indication of the norm.

For example, perhaps we received a report that an Oracle server had severe "latch free" wait events during a period of bad performance. If we respond directly to that symptom without adequate high-level analysis of the overall platform/database technology stack, we might overlook heavy process queuing at the operating system level. That is, the Oracle database might appear to be the problem when the real issue is a lack of capacity. Reports from vmstat or iostat would indicate chronic process run queues, so we would know that the Oracle database itself is probably not the culprit, at least not the primary culprit. Once the resource limit is addressed, by tuning the application, rescheduling processes, or adding more or faster processors, we can proceed with the stack analysis and identify server constraints in their proper context.

```
> tracert mail12

Tracing route to mail12.us.snerdley.com [ ] over a maximum of 30 hops:

  1   <10 ms   <10 ms    10 ms  whq17jones-rtr-755f1-0-a.us.snerdley.com [ ]
  2   <10 ms   <10 ms   <10 ms  whq4op3-rtr-714-f0-0.us.snerdley.com [ ]
  3       ms   210 ms   231 ms  mail12.us.snerdley.com [ ]

Trace complete.
```

Figure 5: Sample tracert used to identify potential network problems. Coupled with ping, a number of common issues can be quickly identified. Ping each device shown in the tracert output, with the don't-fragment bit set and a large packet size, to isolate the performance of individual segments. Although tracert shows timing information, it is for very small packets and may not isolate bottlenecks, so ping is used in conjunction with tracert.

## STACK DRILL-DOWN

Each technology stack is then analyzed in detail to ascertain the source of the bottleneck. Since this effort is specific to each stack, the exact drill-down techniques are beyond the scope of this introductory paper. A network analysis, for example, might involve the services of a network administrator and the use of a network sniffer. Figure 5 shows an example of using the tracert utility to analyze network performance. The specific techniques for each stack differ greatly and must be developed and supported specifically for each environment.

For the Oracle database server, a tool such as Statspack or Oracle Enterprise Manager can provide a focused accounting of wait events. Sample the data for a narrow, busy period. One of the most common errors in database statistics gathering is to assume more is better and sample for too long a period. If the performance symptoms appear for fifteen minutes each hour, then an hour-long sample only averages the wait events and hides the real cause of the problem. Data gathered should represent actual activity during the most pronounced performance symptoms for the clearest picture.
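Reviewing the top wait events in order of time waited amounts to a simple descending sort over the sampled period. The event names below are common Oracle wait events, but the centisecond figures are invented for illustration.

```python
# Rank wait events by time waited, descending: the first step of the
# database drill-down. Event names are common Oracle wait events; the
# time-waited figures are invented for illustration.

wait_events = {
    "db file sequential read":      182_340,
    "latch free":                    41_220,
    "log file sync":                 95_410,
    "SQL*Net more data to client":   12_050,
}

def top_waits(events, n=3):
    """Return the n events with the most time waited, largest first."""
    return sorted(events.items(), key=lambda kv: kv[1], reverse=True)[:n]

for name, waited in top_waits(wait_events):
    print(f"{waited:>8}  {name}")
```

Ordering by time waited, rather than by raw wait counts, is the point: a very frequent but cheap event matters less than a rarer one that dominates the response time.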
Create the associated report, such as a Statspack report for the database server, and review it for the top wait events, in order of time waited. Each of those events provides evidence of the cause of the problem and a path for further drill-down. Much information is available on the significance of each wait event in the context of Oracle's internal operations, and it is up to the individual performance analyst to learn how to interpret and respond to wait event statistics. Although many tools purport to offer tuning advice, there is no substitute for individual knowledge and training. A good source of information on database wait events is Anjo Kolk's YAPP paper, referenced in the bibliography.

A note of caution is due here. There exists a blurry distinction between capacity planning and performance management; the two subjects are tightly intertwined. One of the important skills required when engaging in performance analysis is to properly distinguish between a capacity problem and an actual performance problem. If a problem develops slowly over time, in the form of gradual performance degradation as workload grows naturally, the problem is a matter of capacity, not performance per se. A performance issue is a technical matter to be dealt with in a primarily technical manner, while a capacity problem quickly turns into a business decision: if a server needs additional capacity, that capacity must be purchased or done without.

## FIX THE STACK

Having identified the worst bottleneck, it is now time to apply an appropriate remedy. Again, as an introduction to the COE Performance Methodology, it is beyond our scope to list possible fixes here. You may have identified a bug, a matter of human error, or a hardware failure. Whatever the cause, use your engineering, and perhaps diplomatic, skills to get it fixed.

## TEST AGAINST BASELINE

Now that the single worst bottleneck has been identified and relieved, it is time to rerun the test case and compare against the baseline and the SLA to establish relative success. We use the term relative here to suggest the problem might not be altogether solved; it is common to find that relieving one bottleneck only reveals another. If you have achieved success, document that fact, stop tuning and go home. You do get to go home, don't you?

Performance management is, of course, an ongoing process. This is not meant to suggest the diagnostician will walk away and not continue to monitor performance. On the contrary, proactive monitoring is the best way to avoid emergencies. It is important, however, to distinguish between reactive and proactive efforts and not be caught in the trap of managing one crisis into the next. After the crisis is resolved, review performance against the baseline, and update the baseline if hardware or software configurations have changed. Continue to monitor proactively.

## REPEAT UNTIL COMPLETE

If success, as defined by the agreement established in the problem statement, cannot yet be declared, go back to the second step above and rerun the analysis to identify the stack now containing the worst bottleneck. Consider the possibility that the bottleneck has moved to another stack. It is also possible there is no ready relief for the problem. This may be a case where a performance problem is actually a capacity issue, in which case an investment decision may need to be made. Alternatively, the root of the problem may be a bug or a hardware failure for which there is no immediate solution.

Often one symptom will mask another; it is not uncommon for multiple, unrelated problems to manifest themselves at the same time. In a recent engagement involving a sudden and dramatic increase in response time in a production database, heavy contention was discovered within the file system.
Once several large objects were moved to other, less busy disk drives, throughput increased fourfold, but response time for individual users was still slow. Further investigation from the top down revealed that certain SQL statements did not properly use an index. Both issues surfaced at the same time because the introduction of a new business transaction type caused a concentration of activity on the affected disk objects while invoking SQL statements not previously executed. Once the SQL was corrected to be more selective, performance returned to normal, acceptable levels and the engagement ended. Performance problems are like onions: you peel them one layer at a time.

## TOOLS TO DO THE JOB

In order to perform the multiple levels of diagnostics required for each stack, a number of tools will be needed. Commercial software and hardware products are available from various vendors, and free software tools abound. It is beyond the scope of this paper to identify all such tools, but obvious sources include hardware and software vendors as well as the various open source consortia. Commonly used diagnostic tools mentioned already include sar, iostat, vmstat, netstat and ping for UNIX platforms. Tools offer varying degrees of comprehensiveness and integration; naturally, an integrated tool is likely to be more convenient to implement than a set of point-solution tools.

For Oracle servers, obvious choices include Oracle Enterprise Manager (EM), the utlbstat/utlestat scripts, and Statspack. EM has features incorporating the basic methodology described here. Utlbstat/utlestat and Statspack have the virtue of being included with the server at no extra charge. Statspack, shipped with recent Oracle database servers, is intended as a replacement for the utlbstat/utlestat scripts and offers excellent, comprehensive features for ongoing monitoring of the database.
All of these tools will report data for selected intervals and will provide a view of the wait event interface built into the Oracle server kernel. THE COE PERFORMANCE METHOD 13
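To make the interval reporting concrete, here is a minimal Python sketch of what interval-based tools such as Statspack and utlbstat/utlestat do with the wait event interface: take two snapshots of cumulative wait counters and report the per-event wait time accumulated between them. This is illustrative only, not any tool's actual implementation; the snapshot dictionaries are hypothetical sample data loosely modeled on the V$SYSTEM_EVENT view (event name mapped to cumulative time waited, in centiseconds).

```python
# Illustrative sketch: per-event wait-time deltas between two snapshots,
# the way interval-based tools summarize the wait event interface.
# Snapshot data is hypothetical, modeled on V$SYSTEM_EVENT
# (event name -> cumulative TIME_WAITED in centiseconds).

def wait_deltas(begin_snap, end_snap):
    """Return wait time accumulated per event during the interval, worst first."""
    deltas = {}
    for event, end_time in end_snap.items():
        # Events absent from the begin snapshot started the interval at zero.
        delta = end_time - begin_snap.get(event, 0)
        if delta > 0:
            deltas[event] = delta
    return sorted(deltas.items(), key=lambda item: item[1], reverse=True)

begin_snap = {"db file sequential read": 120_000, "log file sync": 4_000}
end_snap = {"db file sequential read": 186_000, "log file sync": 4_500,
            "enqueue": 9_000}

for event, centisecs in wait_deltas(begin_snap, end_snap):
    print(f"{event:<28} {centisecs / 100:8.1f} s")
```

Ranking the deltas worst-first is the point: the event at the top of the interval report is the candidate bottleneck for the drill-down described earlier.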

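The TEST AGAINST BASELINE and REPEAT UNTIL COMPLETE steps above can be sketched as a simple decision: recombine measured service and wait time into response time using the paper's fundamental equation, then compare against the baseline and the SLA. The function name, thresholds and verdict strings below are hypothetical illustrations, not part of the methodology's vocabulary.

```python
# Illustrative sketch of the TEST AGAINST BASELINE decision. Only the
# equation Response Time = Service Time + Wait Time comes from the paper;
# names and numbers here are hypothetical.

def evaluate_run(service_time, wait_time, baseline_response, sla_response):
    """Classify a test run relative to the baseline and the SLA (seconds)."""
    response_time = service_time + wait_time  # the fundamental equation
    if response_time <= sla_response:
        verdict = "success: document it, stop tuning and go home"
    elif response_time < baseline_response:
        verdict = "relative success: a bottleneck remains, repeat the analysis"
    else:
        verdict = "no relief: suspect capacity, a bug, or a masked second problem"
    return response_time, verdict

response, verdict = evaluate_run(service_time=2.0, wait_time=1.5,
                                 baseline_response=6.0, sla_response=3.0)
print(f"response={response:.1f}s -> {verdict}")
```

The middle branch captures the "relative" success discussed above: response time improved over the baseline, yet the SLA is still unmet, so the analysis loops back to find the next bottleneck.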
A MEASURE OF DIPLOMACY

Besides tools to cover the technology spectrum under your domain, you will also need occasional cooperation from other experts. One of the more common problems of the contemporary enterprise is a direct outgrowth of the integration of disparate technologies: communication barriers. Often, the administrators of the database, the hardware platform and the network belong to entirely different management structures. While a performance methodology such as this cannot address political turf, cooperation is necessary to quickly diagnose potentially complex problems.

BACK TO THE CONCEPTS MANUAL

An understanding of Oracle concepts is fundamental to effective performance analysis. Have you read the Concepts Manual lately? That material covers all components of the Oracle server, including Buffer Cache operations, enqueues, latches, the Library Cache, the Shared Pool, redo, undo, and the lgwr, dbwr and smon background processes. Oracle9i documentation includes Oracle9i Database Performance Methods, which along with Oracle9i Database Performance Guide and Reference provides an in-depth discussion of server and application tuning.

For technology stacks other than the database, there is a wealth of material to read. Some excellent sources are listed in the bibliography below. Bear in mind that some of them are written from the perspective of a particular operating system, but they contain concepts applicable to all brands and flavors of platform. Documents available on the Oracle Technology Network site provide an understanding of the wait events Oracle records, giving the queuing analysis perspective you need to apply this methodology and tune the database product effectively. There is a discussion of Oracle wait events, in some detail, as well as an introduction to wait event analysis known as Yet Another Performance Profiling Method (YAPP), by Anjo Kolk.
Also, Oracle9i Database Performance Methods applies the holistic approach to the database in particular. Both are well worth reading. See the Bibliography for details and additional reading.

ACKNOWLEDGEMENTS

The Center of Expertise Performance Methodology has been a collaborative work of many individuals. Current and former members of COE, including Jim Viscusi, Ray Dutcher, Kevin Reardon and others, provided much of the early research. Cary Millsap offered the theoretical foundation for this effort.

BIBLIOGRAPHY

Practical Queueing Analysis, Mike Tanner, McGraw-Hill Book Company (out of print in the United States, but a classic worth finding, available at Amazon's United Kingdom site)

The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling, Raj Jain, John Wiley & Sons

Capacity Planning for Web Performance, Daniel A. Menasce, Virgilio A. F. Almeida, Prentice Hall

Oracle8i Designing and Tuning for Performance Release 2 (8.1.6), Oracle Corporation, part A

Oracle9i Database Performance Methods, Oracle Corporation, part A

Oracle9i Database Performance Guide and Reference, Oracle Corporation, part A

Sun Performance and Tuning: Java and the Internet, Adrian Cockcroft, Richard Pettit, Sun Microsystems Press, a Prentice Hall Title

Oracle Performance Tuning 101, Gaja Krishna Vaidyanatha, Kirtikumar Deshpande, John A. Kostelac, Jr., Oracle Press, Osborne/McGraw-Hill

Oracle Applications Performance Tuning Handbook, Andy Tremayne, Oracle Press, Osborne/McGraw-Hill

Yet Another Performance Profiling Method (YAPP), Anjo Kolk
