11. Comparing Monitoring Frameworks

11.1 inspectIT
[Figure: Overview of the inspectIT components (Siegl and Bouillet 2011)]

The inspectIT monitoring tool (website: ) has been developed by NovaTec. Although the tool is not open source, it is provided free of charge. A short introduction to the tool is given in a white paper by Siegl and Bouillet [2011]; further details are provided by the official documentation. InspectIT is an Application Performance Management (APM) tool for application-level monitoring of Java business applications (see Figure 4.2). Thus, it provides probes, called sensors, for application-level tracing and timing of method calls, as well as probes gathering information from the used libraries, middleware containers, or JVMs. Generally speaking, it is a typical representative of the class of application-level monitoring frameworks and thus a typical System Under Test (SUT) for our MooBench micro-benchmark.
On a high abstraction level, inspectIT consists of three components: (1) the inspectIT Agent, handling the monitoring of the target applications, (2) the CMR repository, receiving all monitoring data and performing analyses of this data, and (3) the Eclipse-based graphical user interface, displaying the data and the results of the CMR. All three components and their interconnection are displayed in Figure .

When benchmarking the monitoring overhead of the inspectIT monitoring framework, the inspectIT Agent is the focus of our interest. Compared to the monitoring component of the Kieker monitoring framework, the inspectIT Agent is less configurable. For instance, it is not possible to instrument a target method without collecting any monitoring data, e. g., by deactivating the probe.

The second inspectIT component employed in most of our benchmark experiments is the CMR repository, since a running CMR is required by the monitoring agent. The monitoring agent transmits all gathered monitoring data to this repository using a TCP connection. Due to this TCP connection, the CMR can be deployed either on the same machine as the inspectIT Agent or on a different, remote machine. Depending on the selected sensors, analyses of the gathered data are performed and visualizations for the user interface are prepared within the CMR. Furthermore, it is possible to activate a storage function to collect all recorded monitoring data on the hard disk.

The third component of inspectIT, the graphical user interface, is not directly relevant for our intended benchmark scenarios, as it is solely involved with displaying the gathered data. Furthermore, due to its graphical nature and its realization as an Eclipse-based tool, its execution is hard to automate for the command-line-based External Controller of our micro-benchmark.
So, although its use might put additional load on the CMR and thus indirectly on the monitoring agent, it is outside the scope of the presented experiments.

Specialties and Required Changes for MooBench

In order to gather more detailed information on our three causes of monitoring overhead (see Section 7.1) despite the limited configuration possibilities of the inspectIT Agent, we make minimal alterations to the provided class files of inspectIT. That is, we add the possibility to deactivate two parts of the employed sensors: first, the whole data collection and, second, the data sending. These changes are implemented with a simple if-clause checking a binary variable set in the respective sensor's constructor. With this implementation, we aim to minimize the perturbation caused by our changes.

Furthermore, we encounter additional challenges with high workloads: inspectIT employs a hash function when collecting its monitoring records. Due to the present implementation, it is only capable of storing one monitoring record per millisecond per monitored method. Thus, the MooBench approach of rapidly calling the same method in a loop is not easily manageable. In order to benchmark higher workloads, we also change this behavior to one monitoring record per nanosecond per monitored method. At least with our employed hardware and software environment, this small change proves to be sufficient. Otherwise, the hash function could easily be adapted to use a unique counter instead of a timestamp.

It is important to note that inspectIT focuses on low overhead instead of monitoring data integrity. That is, in cases of high load, the framework drops monitoring records to reduce the load. This behavior can, to some extent, be countered by providing larger queues and memory buffers to the agent and the CMR. For instance, instead of the default SimpleBufferStrategy, the configurable SizeBufferStrategy can be employed. However, it is not possible to entirely prevent this behavior in a reliable way: even with sufficiently large buffers, loss of monitoring data occasionally occurs.

Additionally, the sending behavior can be configured: with the default TimeStrategy, monitoring data is collected and sent in regular intervals. Contrary to that, the ListSizeStrategy collects a specified amount of data and then sends it.
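The timestamp-keyed storage limit can be illustrated with a toy model (our simplification, not inspectIT's actual code): with millisecond buckets, a tight benchmark loop keeps overwriting the same slot, while nanosecond buckets keep every record.

```python
def store(records, method_id, timestamp_ns, bucket_ns):
    # Key one record per method per time bucket, as a toy model of a
    # timestamp-based hash key (bucket_ns = 1_000_000 mimics milliseconds).
    records[(method_id, timestamp_ns // bucket_ns)] = timestamp_ns

# 1000 calls to the same method within a single millisecond (1 µs apart).
timestamps = [i * 1_000 for i in range(1000)]

per_ms, per_ns = {}, {}
for ts in timestamps:
    store(per_ms, 42, ts, bucket_ns=1_000_000)  # all calls collide
    store(per_ns, 42, ts, bucket_ns=1)          # every call kept

print(len(per_ms), len(per_ns))  # 1 1000
```

The suggested alternative, a unique counter instead of a timestamp, would make the key collision-free regardless of call rate.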
However, in our experiments, the TimeStrategy provides superior performance under high load. The ListSizeStrategy either sends many small messages (small list sizes) or sending each message takes much longer compared to similar message sizes of the TimeStrategy (large list sizes).

Finally, we encounter challenges with threaded benchmarks. On our test systems, with more than six active benchmark worker threads providing stress workloads, inspectIT crashes due to excessive garbage collection activity.
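The lossy buffering described above can be sketched as a bounded queue that discards the oldest record when full; enlarging the buffer (as we later do with the SizeBufferStrategy set to 1,000,000) reduces, but does not structurally prevent, record loss. Class and counter names here are ours, not inspectIT's:

```python
from collections import deque

class SizeBuffer:
    """Toy model of a bounded record buffer that drops the oldest
    entry when full, trading data integrity for low overhead."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)
        self.dropped = 0

    def add(self, record):
        if len(self.buf) == self.buf.maxlen:
            self.dropped += 1  # deque silently evicts the oldest entry
        self.buf.append(record)

small = SizeBuffer(capacity=100)
large = SizeBuffer(capacity=1_000_000)
for i in range(10_000):
    small.add(i)
    large.add(i)

print(small.dropped, large.dropped)  # 9900 0
```

With a sufficiently slow sender, even the large buffer eventually fills, which matches the observation that occasional data loss cannot be ruled out entirely.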
General Benchmark Parameters

In the following, we describe a series of benchmark experiments we perform to evaluate the feasibility of MooBench for determining the monitoring overhead of the application-level monitoring framework inspectIT. Except as noted below, all of these experiments utilize the same sets of environment, workload, and system parameters.

Specifically, we conduct our experiments with the Oracle Java 64-bit Server VM in version 1.7.0_45 with up to 12 GiB of heap space provided to the JVM. Furthermore, we utilize X6270 Blade Servers with two Intel Xeon 2.53 GHz E5540 Quadcore processors and 24 GiB RAM running Solaris 10. This hardware and software system is used exclusively for the experiments and is otherwise held idle. The instrumentation of the monitored application is performed through load-time weaving using the (modified) inspectIT Agent.

The configuration of the MooBench parameters is, except as noted, left at default values, as described in Listing 7.2 on page 120. That is, we use 2,000,000 calls of the monitored method with a recursion depth of ten. Our experiments suggest discarding the first 1,500,000 monitored executions as warm-up period for executions of inspectIT (see Figure 11.2). Each part of each experiment is repeated ten times. The configured method time depends on the intended experiment and is set either to 500 µs or to 0 µs for ten recursive calls.

InspectIT is configured with all default sensors active, including the platform sensors collecting data concerning CPU or memory usage. Depending on the benchmark scenario, the isequence sensor and/or the timer sensor are employed with a targeted instrumentation of the monitoredMethod(). Furthermore, the TimeStrategy is used to send all monitoring data every second. To provide additional buffers, the SizeBufferStrategy is configured to a size of 1,000,000. Otherwise, the default configuration of inspectIT is employed.
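The warm-up handling boils down to discarding the first measurements of each run before computing statistics; a minimal sketch with synthetic numbers (the real analysis scripts ship with MooBench):

```python
def steady_state_mean(response_times_us, warmup):
    """Drop the warm-up measurements, average the remainder."""
    steady = response_times_us[warmup:]
    return sum(steady) / len(steady)

# Synthetic run: 2,000,000 measurements where JIT compilation and garbage
# collection inflate the first 1,500,000 (values are invented).
run = [50.0] * 1_500_000 + [1.0] * 500_000
print(steady_state_mean(run, warmup=1_500_000))  # 1.0
```

Without discarding the warm-up, the mean of this synthetic run would be dominated by the inflated early measurements rather than the steady state.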
An exemplary time series diagram for initial experiments with inspectIT is presented in Figure . Our suggested warm-up period of 1,500,000 monitored executions for a full-blown instrumentation experiment is evident in this graph.

[Figure: Time series diagram for the initial experiment with inspectIT]

Especially with the full-blown instrumentation experiment, massive garbage collector activity is visible. Besides the visible spikes in the presented graph, this is also documented in the logs of each benchmark run. For instance, during some benchmark executions the JVM blocks with garbage collection activity for up to 0.5 s per 100,000 monitored executions. Similarly, our observations of the environment show between two and ten active and fully-loaded threads during these experiments. Finally, our observation of JIT compiler activity also suggests the warm-up period of 1,500,000 monitored executions: although most compilations in our benchmark runs occur earlier, especially Java functions associated with sending the collected data are often compiled late in the runs.

Our configurations of the inspectIT Agent and the CMR, as well as of the accompanying loggers, are provided with MooBench. Additionally, we provide the preconfigured External Controllers used to run our benchmarking experiments. However, due to licensing constraints, we are unable to provide the modified versions of the class files. In order to configure MooBench for inspectIT, the ant build target build-inspectit can be used. This way, a ready-to-use benchmark environment is provided.
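As described earlier, the agent ships its records to the CMR over a plain TCP connection; a minimal sketch of that interaction (hypothetical record format and function names, not inspectIT's actual wire protocol):

```python
import json
import socket
import threading

def start_repository(host="127.0.0.1"):
    """Stand-in for the CMR: accept one agent connection, store its records."""
    srv = socket.socket()
    srv.bind((host, 0))  # let the OS pick a free port
    srv.listen(1)
    received = []

    def serve():
        conn, _ = srv.accept()
        with conn, conn.makefile() as lines:
            for line in lines:  # one JSON record per line
                received.append(json.loads(line))
        srv.close()

    thread = threading.Thread(target=serve)
    thread.start()
    return srv.getsockname()[1], received, thread

def send_records(records, host, port):
    """Agent side: ship monitoring records over one TCP connection."""
    with socket.create_connection((host, port)) as sock:
        for record in records:
            sock.sendall((json.dumps(record) + "\n").encode())

port, received, thread = start_repository()
send_records([{"method": "monitoredMethod", "duration_us": 500}], "127.0.0.1", port)
thread.join()
print(received)
```

Because only the socket endpoint differs, the same code path serves a CMR on the local machine and on a remote host, which is exactly what makes the local/remote comparison later in this section possible.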
Table . Response time for the initial inspectIT experiment (in µs)

          No instr.   Platform   Full-blown   No CMR
Mean
95% CI
Q1
Median
Q3
Min
Max

Initial Performance Benchmark

In this section, we present the results of our first overhead evaluation of the inspectIT monitoring tool. Our goal is a direct evaluation of inspectIT, without any of our modifications for more detailed studies (see Section ). However, in order to measure the overhead of monitoring all method executions, we have to tune the employed hash function for these experiments as well.

Our evaluation consists of four experiment runs:

1. First, we measure the uninstrumented benchmark system to establish a baseline for the response time of executing the monitoredMethod().
2. Next, we instrument the benchmark system with inspectIT using only platform sensors. That is, the monitoredMethod() is not instrumented, but infrastructure data is collected from the JVM and sent to the CMR. This experiment run establishes the basic costs of instrumenting a system with inspectIT.
3. In the third experiment run, a full-blown instrumentation is employed. That is, in addition to the platform sensors, the monitoredMethod() is instrumented with isequence and timer probes that collect tracing and time information from each execution. All collected data is in turn sent to the CMR.
4. Finally, we again employ a full-blown instrumentation, but we do not provide an available CMR.
[Figure: Median response time for the initial experiment with inspectIT (including the 25% and 75% quartiles)]

Experimental Results & Discussion

The results of our evaluation are presented in Table . Additionally, we visualize the median response times in Figure . Finally, we already presented a time series diagram for this experiment in Figure .

For the uninstrumented benchmark system, we measure an average response time of µs. Adding inspectIT with platform instrumentation minimally increases this average response time to µs. However, due to the added activity, the measured maximal execution time was µs. Such high values are rare, as is evident from the quartiles and from the mean being very similar to the median.

Activating a full-blown instrumentation further increases the measured average response time to µs. The difference to the median, as well as the quartiles, hint at a skewness in the measured distribution. Of special note is the measured maximum of more than 300 ms. By deactivating the CMR, the average response time further increases to µs, albeit with less skewness.

The full-blown instrumentation of our benchmark results in a rather high overhead of about 17 µs per monitored method execution (we utilize a stack depth of ten). Furthermore, the memory usage and garbage collection activity are very high. Our original intent in deactivating the CMR was to deactivate the sending of monitoring data from the inspectIT Agent to the CMR. However, as is evident in our results, deactivating the CMR increases the monitoring overhead instead of reducing it. This is probably caused by timeouts in the sending mechanism.

In summary, we are able to determine the monitoring overhead of a full-blown instrumentation of inspectIT compared to an uninstrumented
benchmark run. However, we are not able to further narrow down the actual causes of monitoring overhead with this experiment. In the following experiments, we add our already discussed modifications to inspectIT in order to perform finer-grained experiments.

Comparing the isequence and timer Probes

In our second overhead evaluation of inspectIT, we target two new aspects: First, we evaluate our three causes of monitoring overhead in the context of inspectIT (see Section 7.1). Second, we compare the provided isequence and timer probes. The isequence probe provides trace information, similar to the common Kieker probes. The timer probe, on the other hand, provides more detailed timings of each monitored execution, e. g., actual processor time spent, rather than trace information. If both probes are active, as in the previous experiment, these data points are combined in the CMR.

Our experiment runs correspond to the default series of monitoring overhead evaluations with MooBench:

1. In the first run, only the execution time of the chain of recursive calls to the monitoredMethod() is determined (T).
2. In the second run, the monitoredMethod() is instrumented with either an isequence or a timer probe that is deactivated for the monitoredMethod(). Thus, the duration T + I is measured.
3. The third run adds the respective data collection for the chosen probe without sending any collected data (T + I + C).
4. The fourth run finally represents the measurement of full-blown monitoring, with the addition of an active Monitoring Writer sending the collected data to the CMR (T + I + C + W).

An active CMR is present in all except the first run. Similarly, platform sensors collect data in all three runs containing inspectIT probes.
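The four runs let us isolate the three overhead portions by subtracting successive means; a sketch with illustrative mean response times chosen to be consistent with the deltas reported later for the 500 µs workload (I ≈ 3.4 µs, C ≈ 23.9 µs, W ≈ 1.5 µs), not the exact table values:

```python
def overhead_portions(t, ti, tic, ticw):
    """Instrumentation I, collection C, and writing W, derived from the
    mean response times of the four MooBench runs."""
    return {"I": ti - t, "C": tic - ti, "W": ticw - tic}

# Illustrative values in µs: baseline 500 µs plus the three portions.
portions = overhead_portions(t=500.0, ti=503.4, tic=527.3, ticw=528.8)
print(portions)
```

The decomposition only works because each run adds exactly one activity on top of the previous one, which is what the deactivation switches described earlier make possible.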
Table . Response time for the isequence probe (in µs)

          No instr.   Deactiv.   Collecting   Writing
Mean
95% CI
Q1
Median
Q3
Min
Max

Table . Response time for the timer probe (in µs)

          No instr.   Deactiv.   Collecting   Writing
Mean
95% CI
Q1
Median
Q3
Min
Max

Experimental Results & Discussion

The results of our benchmark experiments are presented in Tables 11.2 and . Additionally, we provide a direct comparison of the median monitoring overhead of the two different probes in Figure .

For our uninstrumented benchmark, we measure an average response time of µs (T), as expected for the configured µs. Adding a deactivated isequence probe increases the average response time to µs (T + I). Similarly, a deactivated timer probe increases the average response time to µs (T + I). As expected, both values are very similar; the slightly higher overhead of the timer probe can be explained by differences in the employed instrumentation of the two probes.
[Figure: Comparison of the median response times (including the 25% and 75% quartiles) of the isequence (upper bar) and the timer probe (lower bar)]

When adding data collection, the behavior of the two probes starts to differ (T + I + C). The isequence probe further increases the average response time to µs. However, the timer probe raises the measured response time to µs. In both cases, the results are very stable, as is evident from the confidence intervals and the quartiles. Furthermore, the timer probe seems to be the main cause of monitoring overhead in a full-blown instrumentation.

Finally, we add the sending of data (T + I + C + W). Again, the isequence probe causes less additional overhead than the timer probe. The former raises the response time to an average total of µs, while the latter reaches µs. This can be explained by the amount of monitoring data sent: in the case of the isequence probe, a monitoring record is sent with each trace, whereas in the case of the timer probe, a record is sent with each method execution. This is further evident from the higher difference between measured median and mean values with the timer probe, as well as from the high maximal response time (an overhead comparable to the full-blown instrumentation of the previous experiment).

Overall, the isequence probe causes less overhead and more stable results. Furthermore, the monitoring overheads of the two probes alone cannot simply be added to calculate the monitoring overhead of the combination of both probes. Especially the writing part (sending of monitoring records to the CMR) causes higher overhead under the combined load. However, any performance tuning of inspectIT should focus on the collection of monitoring data.
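The difference in data volume follows directly from the probes' record granularity, one record per trace for isequence versus one per monitored execution for timer; with a recursion depth of ten, the timer probe therefore ships an order of magnitude more records (generic numbers, not our benchmark totals):

```python
def records_sent(traces, recursion_depth, per_execution):
    """Records reaching the CMR: one per trace (isequence-style), or one
    per monitored execution, i.e. traces * depth (timer-style)."""
    return traces * recursion_depth if per_execution else traces

traces = 1_000
print(records_sent(traces, 10, per_execution=False),  # isequence: 1000
      records_sent(traces, 10, per_execution=True))   # timer: 10000
```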
Table . Response time for the isequence probe under high load (in µs)

          No instr.   Deactiv.   Collecting   Writing
Mean
95% CI
Q1
Median
Q3
Min
Max

[Figure: Comparison of the median response time overhead of inspectIT probes with workloads of 0 µs and 500 µs (including the 25% and 75% quartiles)]

Comparing Different Workloads

In our third evaluation of inspectIT, we compare different workloads. Specifically, we compare our results from the previous evaluation, which used a method time of 500 µs, with a similar benchmark experiment using a configured method time of 0 µs. Due to its similarity to the common Kieker probes, we focus the rest of our evaluations on the isequence probe. Furthermore, high workloads in combination with the timer probe regularly result in errors due to excessive garbage collection.

The results of our benchmark experiments are presented in Table . Additionally, we provide a direct comparison of the median monitoring overhead of the two different workloads in Figure .
Discussion of Results

Overall, the results of our benchmark experiment with a configured method time of 0 µs are similar to our previous experiment with a configured method time of 500 µs. The overhead of the deactivated probe (I) is slightly lower (from 3.4 µs down to 2.4 µs). This reduction is probably caused by different JIT optimizations, e. g., inlining, due to the changed runtime behavior (shorter execution path) of the monitoredMethod().

The overheads of collecting (C) and sending data (W) are slightly increased: the former increases from 23.9 µs to 25 µs, while the latter increases from 1.5 µs to 2.4 µs. This rise is caused by the higher workload of our benchmark. It is especially visible in the difference between the mean and median response times of writing (hinting at a skewness in our measurement distribution), as well as in the measured maximum response time of almost one second.

Again, the experiment hints at the collection of monitoring data as a main target for future performance tunings of inspectIT. A major cause of the skewness and the high maxima is garbage collector activity (with garbage collection regularly taking more than 0.5 s). Furthermore, the burst behavior of sending data (also affecting the high maximal response times) could be improved. However, this could also be caused by our employed buffering and sending strategy. By adjusting these parameters in additional experiments, such performance tunings could be guided. However, these experiments, as well as the corresponding tunings, are outside the scope of our evaluation of MooBench with inspectIT.

Comparing a Local CMR to a Remote CMR

In our next evaluation of inspectIT, we compare a locally deployed CMR to a CMR deployed on a remote server. The remote server is a machine configured identically to our primary server running the benchmark. It is connected to the primary server with a common 1 GBit/s local area network.
Furthermore, we employ the configured method time of 0 µs from our last experiment and an isequence probe. Thus, we can reuse our previous measurements for the local CMR.
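When reading such results, note that per-record costs and rare outliers show up differently in the median and the mean: a constant addition to every send moves the median one-to-one, while a mean dominated by a few garbage-collection spikes reacts mostly to those spikes. A small illustration with invented numbers in µs (not our measured distributions):

```python
import statistics

# 990 ordinary send overheads plus 10 GC-induced outliers (values invented).
local  = [2.4] * 990 + [600.0] * 10
# Remote repository: +2.8 µs on every ordinary send, slightly smaller
# outliers (the local CPUs are freed from the repository's threads).
remote = [5.2] * 990 + [290.0] * 10

print(statistics.median(local), statistics.median(remote))  # 2.4 5.2
print(round(statistics.mean(local), 1), round(statistics.mean(remote), 1))
```

In this toy distribution the median jumps with the per-record cost while the mean barely moves, which is the reading pattern applied to the local/remote comparison below.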
Table . Response time for the isequence probe with a remote CMR (in µs)

          No instr.   Deactiv.   Collecting   Writing
Mean
95% CI
Q1
Median
Q3
Min
Max

[Figure: Comparison of the median response time overhead of inspectIT probes with local and remote CMR (including the 25% and 75% quartiles)]

Experimental Results & Discussion

The results of our benchmark experiments are presented in Table . Additionally, we provide a direct comparison of the median monitoring overhead of the local CMR with the remote CMR in Figure .

The measured response times for the deactivated probe (I), as well as for the data collection (C), are very similar and in both cases slightly below those of the local CMR. This is probably caused by freeing the available CPUs from several additional CMR threads. In contrast, the additional response time overhead of sending the monitoring data (W) increases from an average of 7.9 µs to 8.0 µs and from a median value of 2.4 µs to 5.2 µs. The fact that mostly the median is affected rather than the mean hints at a slightly higher overhead for sending each monitoring record, while the garbage collector behavior (causing a few very high response times) is unaffected.

Conclusions

Our benchmark experiments demonstrate the applicability of MooBench as a benchmark to determine the monitoring overhead of inspectIT. However, some adjustments to inspectIT, as detailed in Section , are required. The gravest adjustment is required in the employed hash function, but this behavior can also be considered a bug and might be fixed in future versions. Otherwise, at least a baseline evaluation of the monitoring tool is feasible. With additional minor adjustments, which could also easily be included in inspectIT releases or developer versions, more detailed experiments are possible to determine the actual causes of monitoring overhead. These experiments provide first hints at possible targets for future performance optimizations of inspectIT.

Compared to Kieker, the total monitoring overhead is higher, similar to earlier versions of Kieker before benchmark-aided performance evaluations were performed. This higher overhead is mainly caused by the collection of monitoring data. On the other hand, the actual sending of monitoring data is similar to or slightly faster than with Kieker. This can be caused by Kieker's focus on data integrity, contrary to inspectIT's focus on throughput. That is, inspectIT discards gathered data to reduce overhead, while Kieker (in its default configuration) either blocks or terminates under higher load.

As mentioned before, our configuration of inspectIT is provided with MooBench. Additionally, we provide ready-to-run experiments in order to facilitate repeatability and verification of our results. Similarly, our raw benchmark results, as well as the produced log files and additional diagrams, are available for download at
Holistic Performance Analysis of J2EE Applications
Holistic Performance Analysis of J2EE Applications By Madhu Tanikella In order to identify and resolve performance problems of enterprise Java Applications and reduce the time-to-market, performance analysis
An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide
Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide An Oracle White Paper July 2011 1 Disclaimer The following is intended to outline our general product direction.
How accurately do Java profilers predict runtime performance bottlenecks?
How accurately do Java profilers predict runtime performance bottlenecks? Master Software Engineering How accurately do Java profilers predict runtime performance bottlenecks? Peter Klijn: Student number
<Insert Picture Here> Introducing Oracle VM: Oracle s Virtualization Product Strategy
Introducing Oracle VM: Oracle s Virtualization Product Strategy SAFE HARBOR STATEMENT The following is intended to outline our general product direction. It is intended for information
SOLUTION BRIEF: SLCM R12.8 PERFORMANCE TEST RESULTS JANUARY, 2013. Submit and Approval Phase Results
SOLUTION BRIEF: SLCM R12.8 PERFORMANCE TEST RESULTS JANUARY, 2013 Submit and Approval Phase Results Table of Contents Executive Summary 3 Test Environment 4 Server Topology 4 CA Service Catalog Settings
Power Efficiency Comparison: Cisco UCS 5108 Blade Server Chassis and IBM FlexSystem Enterprise Chassis
White Paper Power Efficiency Comparison: Cisco UCS 5108 Blade Server Chassis and IBM FlexSystem Enterprise Chassis White Paper March 2014 2014 Cisco and/or its affiliates. All rights reserved. This document
Magento & Zend Benchmarks Version 1.2, 1.3 (with & without Flat Catalogs)
Magento & Zend Benchmarks Version 1.2, 1.3 (with & without Flat Catalogs) 1. Foreword Magento is a PHP/Zend application which intensively uses the CPU. Since version 1.1.6, each new version includes some
MID-TIER DEPLOYMENT KB
MID-TIER DEPLOYMENT KB Author: BMC Software, Inc. Date: 23 Dec 2011 PAGE 1 OF 16 23/12/2011 Table of Contents 1. Overview 3 2. Sizing guidelines 3 3. Virtual Environment Notes 4 4. Physical Environment
A Comparison of Oracle Performance on Physical and VMware Servers
A Comparison of Oracle Performance on Physical and VMware Servers By Confio Software Confio Software 4772 Walnut Street, Suite 100 Boulder, CO 80301 www.confio.com Introduction Of all the tier one applications
find model parameters, to validate models, and to develop inputs for models. c 1994 Raj Jain 7.1
Monitors Monitor: A tool used to observe the activities on a system. Usage: A system programmer may use a monitor to improve software performance. Find frequently used segments of the software. A systems
<Insert Picture Here> Java Application Diagnostic Expert
Java Application Diagnostic Expert Agenda 1. Enterprise Manager 2. Challenges 3. Java Application Diagnostics Expert (JADE) 4. Feature-Benefit Summary 5. Features Overview Diagnostic
SOLUTION BRIEF: SLCM R12.7 PERFORMANCE TEST RESULTS JANUARY, 2012. Load Test Results for Submit and Approval Phases of Request Life Cycle
SOLUTION BRIEF: SLCM R12.7 PERFORMANCE TEST RESULTS JANUARY, 2012 Load Test Results for Submit and Approval Phases of Request Life Cycle Table of Contents Executive Summary 3 Test Environment 4 Server
EWeb: Highly Scalable Client Transparent Fault Tolerant System for Cloud based Web Applications
ECE6102 Dependable Distribute Systems, Fall2010 EWeb: Highly Scalable Client Transparent Fault Tolerant System for Cloud based Web Applications Deepal Jayasinghe, Hyojun Kim, Mohammad M. Hossain, Ali Payani
Estimate Performance and Capacity Requirements for Workflow in SharePoint Server 2010
Estimate Performance and Capacity Requirements for Workflow in SharePoint Server 2010 This document is provided as-is. Information and views expressed in this document, including URL and other Internet
SQL Server Business Intelligence on HP ProLiant DL785 Server
SQL Server Business Intelligence on HP ProLiant DL785 Server By Ajay Goyal www.scalabilityexperts.com Mike Fitzner Hewlett Packard www.hp.com Recommendations presented in this document should be thoroughly
Technical Investigation of Computational Resource Interdependencies
Technical Investigation of Computational Resource Interdependencies By Lars-Eric Windhab Table of Contents 1. Introduction and Motivation... 2 2. Problem to be solved... 2 3. Discussion of design choices...
Self Adaptive Software System Monitoring for Performance Anomaly Localization
2011/06/17 Jens Ehlers, André van Hoorn, Jan Waller, Wilhelm Hasselbring Software Engineering Group Christian Albrechts University Kiel Application level Monitoring Extensive infrastructure monitoring,
Analysis of VDI Storage Performance During Bootstorm
Analysis of VDI Storage Performance During Bootstorm Introduction Virtual desktops are gaining popularity as a more cost effective and more easily serviceable solution. The most resource-dependent process
OpenProdoc. Benchmarking the ECM OpenProdoc v 0.8. Managing more than 200.000 documents/hour in a SOHO installation. February 2013
OpenProdoc Benchmarking the ECM OpenProdoc v 0.8. Managing more than 200.000 documents/hour in a SOHO installation. February 2013 1 Index Introduction Objectives Description of OpenProdoc Test Criteria
Validating Java for Safety-Critical Applications
Validating Java for Safety-Critical Applications Jean-Marie Dautelle * Raytheon Company, Marlborough, MA, 01752 With the real-time extensions, Java can now be used for safety critical systems. It is therefore
0408 - Avoid Paying The Virtualization Tax: Deploying Virtualized BI 4.0 The Right Way. Ashish C. Morzaria, SAP
0408 - Avoid Paying The Virtualization Tax: Deploying Virtualized BI 4.0 The Right Way Ashish C. Morzaria, SAP LEARNING POINTS Understanding the Virtualization Tax : What is it, how it affects you How
Garbage Collection in the Java HotSpot Virtual Machine
http://www.devx.com Printed from http://www.devx.com/java/article/21977/1954 Garbage Collection in the Java HotSpot Virtual Machine Gain a better understanding of how garbage collection in the Java HotSpot
Angelika Langer www.angelikalanger.com. The Art of Garbage Collection Tuning
Angelika Langer www.angelikalanger.com The Art of Garbage Collection Tuning objective discuss garbage collection algorithms in Sun/Oracle's JVM give brief overview of GC tuning strategies GC tuning (2)
How To Use Inspectit In Java (Java) For A Performance Problem Solving
Whitepaper: The Cost-effective Performance Engineering Solution How to ensure high application performance with little expense Table of Contents Cost-efficient application performance management 3 Safeguarding
Java Performance Tuning
Summer 08 Java Performance Tuning Michael Finocchiaro This white paper presents the basics of Java Performance Tuning for large Application Servers. h t t p : / / m f i n o c c h i a r o. w o r d p r e
Power Efficiency Comparison: Cisco UCS 5108 Blade Server Chassis and Dell PowerEdge M1000e Blade Enclosure
White Paper Power Efficiency Comparison: Cisco UCS 5108 Blade Server Chassis and Dell PowerEdge M1000e Blade Enclosure White Paper March 2014 2014 Cisco and/or its affiliates. All rights reserved. This
Oracle Solaris: Aktueller Stand und Ausblick
Oracle Solaris: Aktueller Stand und Ausblick Detlef Drewanz Principal Sales Consultant, EMEA Server Presales The following is intended to outline our general product direction. It
Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7
Introduction 1 Performance on Hosted Server 1 Figure 1: Real World Performance 1 Benchmarks 2 System configuration used for benchmarks 2 Figure 2a: New tickets per minute on E5440 processors 3 Figure 2b:
- An Essential Building Block for Stable and Reliable Compute Clusters
Ferdinand Geier ParTec Cluster Competence Center GmbH, V. 1.4, March 2005 Cluster Middleware - An Essential Building Block for Stable and Reliable Compute Clusters Contents: Compute Clusters a Real Alternative
Performance Test Report For OpenCRM. Submitted By: Softsmith Infotech.
Performance Test Report For OpenCRM Submitted By: Softsmith Infotech. About The Application: OpenCRM is a Open Source CRM software ideal for small and medium businesses for managing leads, accounts and
An Oracle Benchmarking Study February 2011. Oracle Insurance Insbridge Enterprise Rating: Performance Assessment
An Oracle Benchmarking Study February 2011 Oracle Insurance Insbridge Enterprise Rating: Performance Assessment Executive Overview... 1 RateManager Testing... 2 Test Environment... 2 Test Scenarios...
CentOS Linux 5.2 and Apache 2.2 vs. Microsoft Windows Web Server 2008 and IIS 7.0 when Serving Static and PHP Content
Advances in Networks, Computing and Communications 6 92 CentOS Linux 5.2 and Apache 2.2 vs. Microsoft Windows Web Server 2008 and IIS 7.0 when Serving Static and PHP Content Abstract D.J.Moore and P.S.Dowland
Virtual desktops made easy
Product test: DataCore Virtual Desktop Server 2.0 Virtual desktops made easy Dr. Götz Güttich The Virtual Desktop Server 2.0 allows administrators to launch and maintain virtual desktops with relatively
APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM
152 APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM A1.1 INTRODUCTION PPATPAN is implemented in a test bed with five Linux system arranged in a multihop topology. The system is implemented
Comparison of Windows IaaS Environments
Comparison of Windows IaaS Environments Comparison of Amazon Web Services, Expedient, Microsoft, and Rackspace Public Clouds January 5, 215 TABLE OF CONTENTS Executive Summary 2 vcpu Performance Summary
Hitachi Virtage Embedded Virtualization Hitachi BladeSymphony 10U
Hitachi Virtage Embedded Virtualization Hitachi BladeSymphony 10U Datasheet Brings the performance and reliability of mainframe virtualization to blade computing BladeSymphony is the first true enterprise-class
Enterprise Manager Performance Tips
Enterprise Manager Performance Tips + The tips below are related to common situations customers experience when their Enterprise Manager(s) are not performing consistent with performance goals. If you
Red Hat Network Satellite Management and automation of your Red Hat Enterprise Linux environment
Red Hat Network Satellite Management and automation of your Red Hat Enterprise Linux environment WHAT IS IT? Red Hat Network (RHN) Satellite server is an easy-to-use, advanced systems management platform
Cognos8 Deployment Best Practices for Performance/Scalability. Barnaby Cole Practice Lead, Technical Services
Cognos8 Deployment Best Practices for Performance/Scalability Barnaby Cole Practice Lead, Technical Services Agenda > Cognos 8 Architecture Overview > Cognos 8 Components > Load Balancing > Deployment
Liferay Portal Performance. Benchmark Study of Liferay Portal Enterprise Edition
Liferay Portal Performance Benchmark Study of Liferay Portal Enterprise Edition Table of Contents Executive Summary... 3 Test Scenarios... 4 Benchmark Configuration and Methodology... 5 Environment Configuration...
Virtuoso and Database Scalability
Virtuoso and Database Scalability By Orri Erling Table of Contents Abstract Metrics Results Transaction Throughput Initializing 40 warehouses Serial Read Test Conditions Analysis Working Set Effect of
Application. Performance Testing
Application Performance Testing www.mohandespishegan.com شرکت مهندش پیشگان آزمون افسار یاش Performance Testing March 2015 1 TOC Software performance engineering Performance testing terminology Performance
Performance and scalability of a large OLTP workload
Performance and scalability of a large OLTP workload ii Performance and scalability of a large OLTP workload Contents Performance and scalability of a large OLTP workload with DB2 9 for System z on Linux..............
ORACLE VM MANAGEMENT PACK
ORACLE VM MANAGEMENT PACK Effective use of virtualization promises to deliver significant cost savings and operational efficiencies. However, it does pose some management challenges that need to be addressed
MCTS Guide to Microsoft Windows 7. Chapter 10 Performance Tuning
MCTS Guide to Microsoft Windows 7 Chapter 10 Performance Tuning Objectives Identify several key performance enhancements Describe performance tuning concepts Use Performance Monitor Use Task Manager Understand
<Insert Picture Here> An Experimental Model to Analyze OpenMP Applications for System Utilization
An Experimental Model to Analyze OpenMP Applications for System Utilization Mark Woodyard Principal Software Engineer 1 The following is an overview of a research project. It is intended
Muse Server Sizing. 18 June 2012. Document Version 0.0.1.9 Muse 2.7.0.0
Muse Server Sizing 18 June 2012 Document Version 0.0.1.9 Muse 2.7.0.0 Notice No part of this publication may be reproduced stored in a retrieval system, or transmitted, in any form or by any means, without
XTM Web 2.0 Enterprise Architecture Hardware Implementation Guidelines. A.Zydroń 18 April 2009. Page 1 of 12
XTM Web 2.0 Enterprise Architecture Hardware Implementation Guidelines A.Zydroń 18 April 2009 Page 1 of 12 1. Introduction...3 2. XTM Database...4 3. JVM and Tomcat considerations...5 4. XTM Engine...5
D5.3.2b Automatic Rigorous Testing Components
ICT Seventh Framework Programme (ICT FP7) Grant Agreement No: 318497 Data Intensive Techniques to Boost the Real Time Performance of Global Agricultural Data Infrastructures D5.3.2b Automatic Rigorous
Initial Hardware Estimation Guidelines. AgilePoint BPMS v5.0 SP1
Initial Hardware Estimation Guidelines Document Revision r5.2.3 November 2011 Contents 2 Contents Preface...3 Disclaimer of Warranty...3 Copyright...3 Trademarks...3 Government Rights Legend...3 Virus-free
Oracle WebLogic Thread Pool Tuning
Oracle WebLogic Thread Pool Tuning AN ACTIVE ENDPOINTS TECHNICAL NOTE 2010 Active Endpoints Inc. ActiveVOS is a trademark of Active Endpoints, Inc. All other company and product names are the property
Oracle TimesTen In-Memory Database on Oracle Exalogic Elastic Cloud
An Oracle White Paper July 2011 Oracle TimesTen In-Memory Database on Oracle Exalogic Elastic Cloud Executive Summary... 3 Introduction... 4 Hardware and Software Overview... 5 Compute Node... 5 Storage
