BEA WebLogic Integration Capacity Planning and Performance Tuning
BEA WebLogic Integration 10.2 Capacity Planning and Performance Tuning
Version 10.2
Document Revised: April 2008
Contents

1. Introduction
   Target Audience
   About this Document
   What Is Capacity Planning?
   Purpose of Capacity Planning
   Related Information

2. Capacity Planning Process
   Design the WLI Application
   Tune the Environment
   Prepare the Application for Performance Testing
   Design the Workload
   Define the Unit of Work and SLA
   Design the Load Generation Script
   Configure the Test Environment
   Run Benchmark Tests
   Validating the Results Using Little's Law
   Interpreting the Results
   Run Scalability Tests
   Horizontal and Vertical Scaling
   Conducting Scalability Tests
   Estimate Resource Requirement

A. Capacity Planning Example
   Workload and Load Generation Script  A-3
   Unit of Work and SLA  A-3
   Test Setup  A-3
   Horizontal Scalability  A-4
   Vertical Scalability  A-4
   Hardware Configuration  A-5
   Software Configuration: Horizontal Scalability  A-5
   Software Configuration: Vertical Scalability  A-6
   Database Configuration  A-6
   Scalability Test Results  A-9
   Capacity Estimation  A-10

B. WLI Tuning
   BPM Logging Levels  B-2
   JDBC Data Source Tuning  B-2
   AsyncDispatcher MDB Tuning  B-3
   HTTP Control  B-3
   Internal Queues  B-4
   JMS EG Tuning  B-4
   Document Size  B-4
   Process Tracking  B-5
   B2B Process Tracking  B-5
   Worklist Reporting and Querying  B-6
   JMS Control Tuning  B-6
   Test Console  B-6
   JPD Meta Data Cache Tuning  B-6
CHAPTER 1
Introduction

This guide describes capacity planning, which is the process of determining the hardware and software resources required for running WebLogic Integration (WLI) applications. It also provides information about tuning the environment to maintain and improve the performance of WLI applications.

Note: The tests described in this guide were conducted in a controlled environment; the numbers presented here may not match the results you get when you run the tests in your environment. The numbers are meant to illustrate the capacity planning process.

This guide does not provide a ready reckoner that you can use to look up the hardware resources required for a given scenario. It describes a methodology that you can follow to estimate resource requirements based on a series of tests.

Target Audience

This document is intended for the following audience groups:

- Administrators and others in the IT department who are responsible for determining the IT resources required for running WLI applications in the organization.
- Administrators who are responsible for tuning the environment to maintain and improve the performance of WLI applications.
- Software architects, programmers, and administrators who are involved in designing, developing, and deploying WLI applications.

It is assumed that readers are familiar with the features of WLI. For more information, see the product documentation at

The tests that are part of the capacity planning process must be performed by people who are experienced in conducting performance tests and familiar with techniques for statistical analysis.

About this Document

The information in this guide is organized as follows:

- Introduction (the chapter that you are currently reading): Contains information about the target audience, the definition of capacity planning, and the purposes of capacity planning.
- Capacity Planning Process: Describes the activities involved in the capacity planning process.
- Capacity Planning Example: Illustrates the capacity planning process with an example.
- WLI Tuning: Provides information about tuning the environment to maintain and improve the performance of WLI applications.

What Is Capacity Planning?

Capacity planning is the process of determining the hardware and software resources required for running an application with a given user load at a satisfactory level of service. It is a continuous process that ensures that the required computing resources (primarily hardware) for an application are available at the right time.

While planning for capacity, consider future growth requirements. The number of WLI applications and their scope might increase with time. It is important to determine future requirements well in advance, so that necessary action can be taken to plan for, procure, and install the required resources.

Besides ensuring that adequate resources are available for meeting performance and scalability requirements, capacity planning can save considerable money and resources for the organization.
Purpose of Capacity Planning

Capacity planning is necessary for the following purposes:

- To achieve predictable and consistent performance. The load on a system is not always uniform; it may vary from low to high to very high. Capacity planning helps you provide users a satisfactory level of service at varying server loads. Even at peak loads, performance can be expected to remain within acceptable limits. If capacity is not planned, and adequate resources are not available, users may experience significant variation in performance as the load varies. At very high loads, the system may run so slowly that users cannot use it at all.
- To determine scalability characteristics. Capacity planning helps in determining the scalability characteristics of the application. If the available hardware resources are not adequate for the current load, or if the load is expected to increase in the future, system administrators can estimate the additional hardware resources required to maintain or improve performance.
- To identify current capacity. Even in situations where the available hardware resources are adequate for the current load, capacity planning helps in identifying the limitations of the existing hardware.
- To estimate the additional hardware required to support increased demand. If the load on the servers is expected to increase due to factors such as business expansion and changes in marketing strategy, capacity planning helps in estimating the additional hardware resources necessary for the increased load.

Related Information

- Best Practices for WebLogic Integration
- Database Tuning Guide
- WebLogic Server Performance and Tuning
- Capacity Planning for Web Services: Metrics, Models, and Methods (Menasce and Almeida)
- Guerrilla Capacity Planning: A Tactical Approach to Planning for Highly Scalable Applications and Services (Neil J. Gunther)
- The Practical Performance Analyst (Neil J. Gunther)
CHAPTER 2
Capacity Planning Process

The capacity planning process involves several activities. The following sections describe these activities:

- Design the WLI Application on page 2-2
- Tune the Environment on page 2-3
- Prepare the Application for Performance Testing on page 2-4
- Design the Workload on page 2-4
- Define the Unit of Work and SLA on page 2-6
- Design the Load Generation Script on page 2-7
- Configure the Test Environment on page 2-8
- Run Benchmark Tests on page 2-8
- Validating the Results Using Little's Law on page 2-12
- Run Scalability Tests on page 2-14
- Estimate Resource Requirement on page 2-17

Note: The tests described in this guide were conducted in a controlled environment; the numbers presented here may not match the results that you get when you run the tests in your environment. The numbers are meant to illustrate the capacity planning process.
Design the WLI Application

The following are some of the performance-related design issues that architects and developers should keep in mind while designing WLI applications:

- Invoking JPDs: Use process controls (not service controls) to invoke subprocesses. Service controls are recommended only for invoking web services and JPDs in a different server or cluster. A process control does not incur the overhead of a web service stack, unlike a service control, which has higher CPU and I/O costs.

- Process control callbacks vs. message broker subscriptions: Process control callbacks are faster than message broker subscriptions because callbacks are routed directly to the JPD instance. Message broker subscriptions with a filter involve database access to map the filter value to the process instance.
  Note: Dynamic subscriptions offer loose coupling, so you can use a dynamic subscription instead of a process control callback in design scenarios where loose coupling is required.

- Persistence flags: If your process is stateful by design but the operation does not require the state to be persisted in the database, consider changing the persistence flag to Never or Overflow.
  Note: Persistence set to Never or Overflow might not work properly in a cluster.

- Accessing worklists through worklist APIs vs. WLI JPD controls: Accessing worklists through the worklist APIs is faster than accessing them through WLI JPD controls, although controls are easier to use and program. Controls internally use the worklist APIs, so accessing the worklist directly through the API avoids the overhead of the control runtime. Accessing worklists through JPD controls is recommended in scenarios where process orchestration requires worklist access; for plain worklist manipulation, the worklist APIs are faster.

- JPD state management: Stateless JPDs are executed in memory and their state is not persisted; therefore, they perform better than stateful JPDs. In a scenario where you do not need information about the previous state of a process, use stateless JPDs. There is no I/O cost for stateless JPDs because the state is not persisted, and stateless JPDs are inherently more scalable.

- Callbacks over JMS: For an asynchronous process, if the callback location is a WLS JMS queue and is the same for all instances of the process, WLI performance is affected under high load conditions.

- Synchronous vs. asynchronous processes: An asynchronous process has some latency cost. Consider a task that requires a few hundred milliseconds to complete. If this task is designed as an asynchronous JPD, the overhead of the asynchronous processing infrastructure could increase processing time significantly. If, on the other hand, it is designed as a synchronous JPD, the impact on processing time is not significant. In general, asynchronous processing provides tremendous value for lengthy processes, while synchronous processing is better suited to tasks that are expected to take less time. Note, however, that a synchronous process blocks its caller, which can become a significant bottleneck and cause the server to use many more threads than necessary.

Note: For more information about design considerations that may affect performance, see Best Practices for WebLogic Integration and WLI Tuning on page B-1.

Tune the Environment

Performance of a WLI application depends not just on the design of the application, but also on the environment in which it runs. The environment includes the WLI server, the database, the operating system and network, and the JVM. All of these components should be tuned appropriately to extract good performance from the system.

- Tuning WLI: Appropriate settings should be made for parameters such as JDBC data sources, weblogic.wli.DocumentMaxInlineSize, the process tracking level, the B2B message tracking level, and Log4j. For more information, see WLI Tuning on page B-1.

- Tuning the database: This includes defining settings for initialization parameters, generation of statistics, disk I/O, indexing, and so on. For more information about tuning the database, see the Database Tuning Guide.

- Tuning the operating system and network: Proper tuning of the OS and the network improves system performance by preventing the occurrence of error conditions. For more information, see Operating System Tuning in WebLogic Server Performance and Tuning.

- Tuning the JVM: The JVM heap size should be tuned to minimize the time that the JVM spends on garbage collection and to maximize the number of clients that the server can handle at any given time. For more information, see Tuning Java Virtual Machines in WebLogic Server Performance and Tuning.

Prepare the Application for Performance Testing

Certain minor changes may need to be made to the application for running the performance tests and for invoking the application through load generator scripts. The extent of change depends on the nature of the application, the capability of the load generator, and the outcome expected from the capacity planning process. The following are examples of the changes that may be required:

- If you want to measure end-to-end performance (from process invocation to completion) of an asynchronous JPD without writing callback handlers in the load generator script, you can generate a SyncAsync WSDL for the process and use it in the load generator. This keeps the load generator script simple, leaving all the complexity in the server; the focus remains on the server rather than on the load generator script.

- If the JPD interacts with third-party software, and you do not want to or cannot use that software (for example, to avoid accounting for the delay in calling it), you can use a simulator in place of the third-party software. This ensures that the test results give reliable information about the true performance and capacity of the WLI application.
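As a sketch of the second point: a stand-in for third-party software can be as small as an HTTP stub that returns a canned response after a fixed delay. The example below uses only the JDK's built-in HttpServer; the port, the /acknowledge path, the delay, and the response body are illustrative assumptions, not part of any WLI API.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal stand-in for a third-party service: returns a canned
// acknowledgement after a fixed, configurable delay, so that test results
// reflect the WLI application rather than the external system.
public class ThirdPartySimulator {
    public static HttpServer start(int port, long delayMillis) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/acknowledge", exchange -> {
            try {
                Thread.sleep(delayMillis);          // simulate the partner's typical latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "<ack>OK</ack>".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8901, 50);                            // port and delay are arbitrary choices
        System.out.println("Simulator listening on port 8901");
    }
}
```

A fixed delay roughly matching the real partner's observed latency keeps the workload realistic while removing the partner's variability from the measurement.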
Design the Workload

The quality of the result of any performance test depends on the workload that is used. Workload is the amount of processing that the system is expected to complete. It consists of certain applications running in the system with a certain number of users connecting to and interacting with the system.

The workload should be designed so that it is as close to the production environment as possible. In a production environment with many concurrent users, not all users perform the same operation; the operations vary depending on each user's profile and interests. In addition, users may require think time, which is the time users take to consider possible alternatives and make decisions before triggering an action in the system.

A WLI application that has three types of clients (web services, JMS, and file) may, for example, have a user profile as shown in the following figure.

Figure 2-1 Sample User Profile (a pie chart dividing users among JMS, web services, and file clients at 40%, 40%, and 20%)

User behavior varies depending on the type of application. A user of a long-running process may not execute operations in quick succession; the behavior of a batch-processing user, on the other hand, may be completely different.

The following parameters must be considered while designing the workload:

- Number of users at different load conditions (low, moderate, and peak)
- Average number of concurrent users
- Think time (the time that users take to think before performing an action in the system)
- Profile of the users (the operations that they are expected to execute)
- Peak message-arrival rate
- Message size

The next step is to define the unit of work and the SLA.

Define the Unit of Work and SLA

A Service Level Agreement (SLA) is a contract between the service provider and the service consumer that defines acceptable (and unacceptable) levels of service. The SLA is typically defined in terms of response time or throughput (transactions per second). For systems of a synchronous nature, the aim is to tune the system to achieve the highest throughput while meeting the response-time target (realistically, for about 95% of the transactions); the response time thus becomes the SLA. For systems of an asynchronous nature, throughput (messages per second) is the SLA.

For capacity planning purposes, it is important to define the unit of work (that is, the set of activities included in each transaction) before using it to define the SLA. Consider the purchase order application shown in the following figure.

Figure 2-2 Unit of Work: Purchase Order Application (business transport clients A and B invoke public processes, which flow through common client services and a PO processor to warehouse processors A and B)

Each node is a JPD, and all of these JPDs are required for processing the purchase order. In this scenario, the unit of work (transaction) can be defined as either of the following:

- Each JPD that is executed to process the purchase order.
- The entire flow of operations, starting from the business transport client to receiving a response from the warehouse processor.

It is recommended that the entire flow of business operations, rather than each JPD, be considered a single unit of work.
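Checking a response-time SLA of the "about 95% of transactions" kind reduces to a percentile computation over the measured response times. The sketch below uses the nearest-rank percentile rule, which is an assumed convention; load-generation tools may compute percentiles slightly differently.

```java
import java.util.Arrays;

// Checks whether measured response times meet a response-time SLA for a
// given fraction of transactions (for example, 95%), using the nearest-rank
// percentile rule.
public class SlaCheck {
    public static double percentile(double[] samples, double fraction) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(fraction * sorted.length);  // nearest-rank: 1-based index
        return sorted[Math.max(rank, 1) - 1];
    }

    public static boolean meetsSla(double[] responseTimesSec, double slaSec, double fraction) {
        return percentile(responseTimesSec, fraction) <= slaSec;
    }

    public static void main(String[] args) {
        // Illustrative response times in seconds for one unit of work.
        double[] observed = {0.8, 1.1, 0.9, 1.4, 1.0, 2.6, 1.2, 0.7, 1.3, 1.9};
        System.out.println("95th percentile: " + percentile(observed, 0.95) + " s");
        System.out.println("Meets 2.0 s SLA at 95%: " + meetsSla(observed, 2.0, 0.95));
    }
}
```

With the whole business flow defined as the unit of work, the samples fed to this check should be end-to-end times, not per-JPD times.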
Design the Load Generation Script

The next step is to design the load generation script. A load generation script is required to load the server with the designed workload while running the tests.

Note: For information about running the tests, see Run Benchmark Tests on page 2-8 and Run Scalability Tests on page 2-14.

While writing the load generation script, keep the following points in mind:

- Ensure that each user sends a new request only after the previous request is served and a response is received. If the system processes requests in batches, ensure that a new batch of requests is sent only after the previous batch is processed. The load level becomes easier to understand and manage when you limit each simulated user to a single in-flight request.
- If the rate at which requests are sent is not controlled, requests may continue to arrive faster than the system can process them, leading to issues such as queue overflow.

The following figure depicts a single user sending the next request only after the previous request is processed by the server.

Figure 2-3 Balanced Load Generation Script

With this approach, the arrival rate (load) on the system can be increased by increasing the number of concurrent users, without affecting the system adversely; therefore, the capacity of the system can be measured accurately.

The following figure depicts a single user sending new requests without waiting for the server to finish processing previous requests.

Figure 2-4 Non-blocking Script With Arrival Rate More Than Throughput

This approach could cause issues such as queue overflow and lead to misinterpretation of capacity. A balanced load generation script is recommended.

Configure the Test Environment

The test environment should be configured as described in this section to ensure that the results of the tests are reliable and not affected by external factors.

When the tests are conducted, no process other than those under test must run on any of the test machines (load generator, WLI server, and database). In addition, no other machine or external program must access or use the database machine. OS tasks, such as automatic system updates and scheduled jobs, must not interfere with the tests.

To prevent network-related issues (a slow network or network traffic) from affecting test results, it is recommended that a VLAN be set up with all the machines involved in the tests. This ensures that the test machines are isolated from network traffic originating in or intended for machines that are not involved in the tests. A network speed of 1000 Mbps is recommended for running the tests, and the bandwidth must be monitored while the tests are running.

Run Benchmark Tests

Benchmark tests help in identifying system bottlenecks and tuning the system appropriately. The tests involve increasing the load on the system, in gradual steps, until the throughput does not increase any further.
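The balanced, closed-loop script recommended in the previous section, where each simulated user keeps exactly one request in flight, can be sketched as follows. The unit of work here is a placeholder Callable rather than a real WLI invocation, and the thread-per-user model is an assumption about how the load generator is built.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Closed-loop load driver: each simulated user issues the next request only
// after the previous one completes, so the arrival rate never outruns the
// system's service rate (the balanced pattern of Figure 2-3).
public class ClosedLoopDriver {
    public static long run(int users, long durationMillis, Callable<?> unitOfWork)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicLong completed = new AtomicLong();
        long deadline = System.currentTimeMillis() + durationMillis;
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                while (System.currentTimeMillis() < deadline) {
                    try {
                        unitOfWork.call();               // blocks until the response arrives
                        completed.incrementAndGet();     // only then is the next request sent
                    } catch (Exception e) {
                        break;
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(durationMillis + 5000, TimeUnit.MILLISECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder unit of work: a 20 ms sleep stands in for a real server call.
        long done = run(10, 2000, () -> { Thread.sleep(20); return null; });
        System.out.println("Completed transactions: " + done);
    }
}
```

Increasing `users` raises the offered load without ever exceeding the system's flow-balance rate, which is exactly what makes the measured capacity trustworthy.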
Note: For the purpose of benchmark tests, load is any aspect of the WLI application under test (the number of concurrent users, document size, and so on) that demands system resources.

The load should be increased gradually to ensure that the system has adequate warm-up time. Benchmark tests are run with no think time and with a single WLI machine.

When the throughput stops increasing, one of the following may have occurred:

- Utilization of one of the hardware resources has reached 100%, indicating that the particular resource has become a bottleneck.
- Utilization has not reached 100% for any of the hardware resources, but the throughput has peaked, indicating that the system requires further tuning to make better use of the available hardware resources.

The following figure depicts a Mercury LoadRunner ramp-up schedule in which the initial 10 minutes are for warm-up tests with 10 concurrent users. Subsequently, the load is increased at the rate of 10 additional users every 15 minutes.

Figure 2-5 Test Ramp-up Schedule

The following data must be recorded while running the tests:

- Application behavior
- Concurrent user load
- Response time
- Throughput for each unit of work
- Resource utilization:
  - CPU utilization
  - Memory footprint
  - Network utilization
  - I/O utilization

Use tools such as Mercury LoadRunner and The Grinder for emulating users and capturing metrics. If the load generation tool does not capture all of this data, use OS-specific utilities such as vmstat, iostat, and perfmon.

The following figure shows the result of a benchmark test.

Figure 2-6 Results of Benchmark Test (average response time, average TPS, and average CPU utilization plotted against the number of users, marking the peak TPS, the optimal response time, and the point where TPS starts diminishing)

As users are added, the average TPS increases. When utilization of one of the hardware resources (in this case, CPU) reaches 100%, the average TPS peaks; the response time at this point is the optimal result. When further users are added to the system, the TPS starts diminishing. TPS rises in a near-linear fashion as increasing load is applied until the system saturates due to a CPU or I/O constraint; it then levels off and begins to fall. Response times rise in a near-linear fashion until the saturation point is reached, and then increase in a non-linear fashion.
This pattern of results indicates a typical, well-behaved system in which resources are utilized to the maximum. The next activity in the capacity planning process is to validate the results of the benchmark tests.

Validating the Results Using Little's Law

Before analyzing the test results, validate them using Little's Law to identify bottlenecks in the test setup. The test results should not deviate significantly from the result obtained when Little's Law is applied.

The response-time formula for a multi-user system can be derived using Little's Law. Consider n users with an average think time of z connected to an arbitrary system with response time r. Each user cycles between thinking and waiting for a response, so the total number of jobs in the meta-system (consisting of the users and the computer system) is fixed at n:

n = x (z + r)
r = n/x - z

Here n is the load, expressed as the number of users; z + r is the average cycle time per user (think time plus response time); and x is the throughput. For example, if 100 users with an average think time of 2 seconds drive a measured throughput of 20 TPS, the expected average response time is r = 100/20 - 2 = 3 seconds; a measured value far from this indicates a problem in the test setup.

Note: Maintain consistency in defining units. For example, if throughput is expressed in TPS, response time should be expressed in seconds.

Tips for Thorough Validation of Results

- Run benchmarks for at least thirty minutes, and preferably longer.
- Discard at least the first fifteen minutes of data obtained under load to allow the dynamic parts of the system a one-time warm-up. For example, allow:
  - The JVM to compile hot spots to native code (the Sun JVM), or to optimize the native code at hot spots (JRockit).
  - The JVM to tune the garbage collector (JRockit dynamically picks the GC strategy and dynamically tunes several key parameters).
  - Application and database caches and connection pools to be populated.
- Watch the variability of individual response times and ensure that irregular effects are not skewing the results, for example, an occasional operating system job running in the middle of a test run. A recommended quality metric is the standard deviation of response times divided by the mean response time. Monitor this value for repeated runs, and check that it is reasonably stable.

Interpreting the Results

While interpreting the results, take care to consider only the steady-state values of the system. Do not include ramp-up and ramp-down time in the performance metrics.

When the throughput saturates, utilization of some resource (CPU, memory, hard disk, or network) should have peaked. If utilization has not peaked for any of the resources, analyze the system for bottlenecks and tune it appropriately.

Tips for Analyzing Bottlenecks and Tuning

- To identify potential bottlenecks related to disk I/O, check whether the disk idle time percentage is low or the average disk queue length is consistently high. If so, check for I/O activity caused by the application or WLI, and try to reduce it; otherwise, faster hard disks may be required. Also monitor I/O activity on the database machine.
- To identify memory-related bottlenecks, monitor heap utilization and garbage collection time.
- In addition, look for the following:
  - A long run queue for the processor: if so, try reducing the load.
  - High utilization of network bandwidth: if so, upgrade the network speed.
- If bottlenecks are observed in the load generator machine, try using multiple load generator machines.

If no resource bottlenecks exist at the point when throughput saturates, bottlenecks could exist in the application and system parameters. These bottlenecks could be caused by any of the following:

- WLI: Check whether any of the tuning parameters described in WLI Tuning could make a difference.
  Ensure that the application is designed according to best practices; for more information, see Best Practices for WebLogic Integration. Use profiling tools such as ej-technologies' JProfiler or Quest Software's JProbe to detect CPU or memory issues in the WLI application. JProfiler is recommended, as JProbe is generally considered too heavy to use under load.
- Database: Ensure that the database is tuned appropriately; for more information, see the Database Tuning Guide. Look for database performance issues by using tools such as Statspack and Oracle Performance Manager.
- OS and network: Ensure that the operating system and network parameters are tuned appropriately. For more information, see WebLogic Server Performance and Tuning.
- JVM options: JVM parameters can have a significant impact on performance. For more information, see Tuning Java Virtual Machines in WebLogic Server Performance and Tuning. Use tools such as Sun's JConsole and the BEA JRockit Runtime Analyzer to detect issues related to the JVM.

Run Scalability Tests

A system can be considered scalable when adding hardware resources consistently provides a commensurate increase in performance. Such a system can handle increased load without degradation in performance, although hardware resources may need to be added to handle the increased load. Applications can be scaled horizontally, by adding machines, or vertically, by adding resources (such as CPUs) to the same machine.
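The "commensurate increase in performance" criterion can be quantified as a scaling efficiency: the measured speedup divided by the ideal (linear) speedup for the added resources. A minimal sketch, with illustrative numbers rather than measured results:

```java
// A system scales well when added resources yield a commensurate increase in
// throughput. Scaling efficiency compares the measured speedup against the
// ideal (linear) speedup for the added resources; 1.0 means perfect scaling.
public class ScalingEfficiency {
    // baselineTps: throughput with one unit of resource (one node or one CPU);
    // scaledTps:   throughput with `units` units of that resource.
    public static double efficiency(double baselineTps, double scaledTps, int units) {
        return (scaledTps / baselineTps) / units;
    }

    public static void main(String[] args) {
        // Illustrative numbers: 110 TPS on one node, 400 TPS on four nodes.
        double e = efficiency(110, 400, 4);
        System.out.printf("Scaling efficiency: %.0f%%%n", e * 100);
    }
}
```

Efficiency well below 1.0 at small cluster sizes usually points to clustering or load-balancing overhead of the kind discussed in the next section.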
Horizontal and Vertical Scaling

The following table compares the relative advantages of horizontal and vertical scaling.

Table 2-1 Relative Advantages of Horizontal and Vertical Scaling

Vertical scaling (more resources in a single machine):
- Facilitates easy administration and improves manageability.
- Provides more effective interconnection between system resources.

Horizontal scaling (more machines):
- Offers high availability.
- Has no scalability ceiling.

When an application needs to be scaled, you may opt for horizontal scaling, vertical scaling, or a combination, depending on your requirements.

Horizontal scaling is often appropriate for organizations that decide to invest in low-cost servers initially and acquire more machines as the load increases. However, it involves additional costs for load balancing and system administration. An additional advantage of horizontal scaling is that it suffers less from scalability ceilings, which are expensive to address: with vertical scaling, you cannot scale past the point at which the machine is fully loaded without discarding it for another, larger machine. Vertical scaling requires a high-end machine that allows easy addition of resources when the need arises. A combination of horizontal and vertical scaling is sometimes the best solution; it helps organizations benefit from the relative advantages of both approaches.

The following figure shows a comparison between WLI running on a single non-clustered 4-CPU machine (vertical scaling) and on two clustered 2-CPU machines (horizontal scaling).

Figure 2-7 Horizontal and Vertical Scaling (average TPS plotted against the number of users for the 2 x 2-CPU cluster and the 1 x 4-CPU machine)

Performance in the horizontal scaling scenario (two 2-CPU machines) is slightly lower than in the vertical scaling scenario (a single 4-CPU machine) due to the additional load balancing and clustering overhead in the horizontal scenario. However, you can add machines to increase the capacity of the horizontally scaled system; this is not possible with a vertically scaled system.

Conducting Scalability Tests

Scalability tests help you find out how the application scales when additional resources are added to the system, horizontally and vertically. This information is useful for estimating the additional hardware resources required for a given scenario.

A scalability test involves increasing the load, in gradual steps, until the SLA is achieved or the target resource utilization is reached, whichever occurs first.

Note: In contrast, benchmark tests involve increasing the load until the throughput stops increasing.

For running scalability tests, the workload should be designed to emulate the production scenario as closely as possible. If no human user interaction is necessary and the process invocations happen programmatically, it is recommended that you use a zero-think-time approach, similar to the approach for benchmark tests.

If the target resource utilization level is reached before the SLA is achieved, additional resources must be added to the system. The additional resources (vertical scaling) or machines (horizontal scaling) should be added in the order 1, 2, 4, 8, and so on.

Note: A minimum of three data points must be used to derive the equation for estimating capacity.

All the data recorded while running benchmark tests must also be captured while running the scalability tests. For more information, see Run Benchmark Tests on page 2-8.

Note: Only the data recorded when the resource utilization is closest to the target level must be used to estimate the additional resource requirement.

After running the tests, validate and analyze the results as described for benchmark tests, and then, if required, estimate the additional resource requirement as described in the next section.

Estimate Resource Requirement

A capacity plan helps estimate the current and future resource requirements for the current SLA and for future loads. To create a capacity plan, you need to create a load model of the system; the test results provide the data points for this model. You can derive an equation for the curve obtained from the test results and use it to estimate the additional hardware resources that are required. Use techniques such as linear regression and curve fitting to predict the required resources; you can apply these techniques using spreadsheet applications such as Microsoft Excel.

Note: The accuracy of the prediction depends on the correctness of the load model. The load model should be based on each resource relevant to your application, such as CPU. You can also validate the model against available historical performance data.

The following figure shows the results of a horizontal scalability test.
Figure 2-8 Capacity Estimation: Horizontal Scaling (a plot of average TPS against the number of nodes in the cluster, one node per machine, with a linear trend line fitted to the data)

The graph shows the average number of transactions per second (TPS) at 70% CPU utilization for clusters with a varying number of nodes. For the results of this scalability test, a linear equation of the form y = ax + b is the best fit, where y is the average TPS and x is the number of nodes. In a best-fit curve, R² must approach unity (the value 1).

Note: Though adding resources horizontally or vertically can result in a higher TPS, this may not be useful if the objective is to achieve a certain response time. In such cases, consider using faster CPUs.

Based on the results of the scalability tests and the tuning that is necessary for achieving the required results, configure the application for deployment to the production environment.

Note: If the resources that you decide to purchase for the production environment are not of the same type or model as those used for the scalability tests, you can estimate the resource requirement by using the following formula:

E x T1 / T2

where
E = estimate obtained from the scalability tests
T1 = SPECint rate of the machine on which the tests were executed
T2 = SPECint rate of the machine that you want to purchase
This formula applies only if the scaling is based on the number of CPUs, and the extrapolation using SPECint rates is only an approximation. The capacity planning exercise is best conducted with the same hardware and configuration as the production environment. For more information about SPECint rates, see the SPEC (Standard Performance Evaluation Corporation) website.
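As an arithmetic sketch of this adjustment, the following uses hypothetical SPECint rates and CPU counts (the numbers are illustrative, not measured values):

```python
def adjust_estimate(estimate, specint_test, specint_target):
    """Scale a capacity estimate to different hardware using E x T1 / T2:
    E is the estimate from the scalability tests, T1 the SPECint rate of
    the test machine, and T2 the SPECint rate of the target machine."""
    return estimate * specint_test / specint_target

# Hypothetical example: the tests suggested 8 CPUs on hardware with a
# SPECint rate of 40; the production hardware's rate is 50.
print(adjust_estimate(8, 40, 50))  # 6.4 -> round up to 7 CPUs
```

Because the result is rarely a whole number, round it up to the next whole CPU or machine.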
APPENDIX A

Capacity Planning Example

This example is based on a simple Purchase Order application that processes the following Business Object Documents (BODs), defined according to Open Applications Group (OAG) standards:

A purchase order (OAG 003_process_po_007)
An acknowledgement (004_acknowledge_po_008)

Note: The tests described in this guide were conducted in a controlled environment, so the numbers presented here may not match the results that you get when you run the tests in your environment. The numbers are meant to illustrate the capacity planning process.
The following figure shows the configuration of the application.

Figure A-1 Configuration of Sample Purchase Order Application (the diagram shows the stateful and stateless JPDs listed below, their JMS event generators and controls, and the asynchronous and synchronous message flows between them)

The application has the following components:

Two business transport clients:
- PO JMS Driver JPD
- PO WS Driver JPD

Two business transport public processes:
- PO JMS Transport JPD
- PO WS Transport JPD

A purchase order processor (PO Processor JPD), which accepts a purchase order, iterates through its line items, and returns a callback.

A warehouse processor (PO Warehouse JPD). Each line item in the purchase order has a description, which is mapped to a specific warehouse by using dynamic control properties.

Common services for logging, business-level document validation, and so on.
Workload and Load Generation Script

In this example, the workload is designed with zero think time because the process is automated (no human user interaction) and the goal is expressed in transactions processed per second. A mix of 50% PO WS Driver JPD users and 50% PO JMS Driver JPD users is used. A Mercury LoadRunner-based Java program serves as the load generation client. The client triggers the driver JPDs over SOAP/HTTP and waits for a callback before sending the next request.

Unit of Work and SLA

The entire flow of operations for processing the purchase order (invoking the driver JPD, sending the purchase order document, and receiving an acknowledgement document) is considered a single unit of work. The target SLA for this example is 50 purchase orders processed per second, with a maximum average CPU utilization of 50%.

Test Setup

The following sections describe the hardware and software configurations used for the scalability tests.
Horizontal Scalability

Figure A-2 Test Setup: Horizontal Scalability (LoadRunner drives a load balancer in front of WLI Servers 1 through 4, which share a single database)

Vertical Scalability

Figure A-3 Test Setup: Vertical Scalability (LoadRunner drives a single WLI server backed by a database)
Hardware Configuration

Table A-1 Hardware Configuration

WLI (Horizontal Scalability)
  Model: Dell 6850
  OS: Windows Server 2003, Enterprise Edition
  CPU: 2 x 3.4 GHz Intel Xeon Dual Core
  RAM: 16 GB
  Hard drive: 5 x 146 GB, 10K RPM SCSI
  Network interface: 1000 Mbps Ethernet

WLI (Vertical Scalability)
  Model: Dell 6850
  OS: Windows Server 2003, Enterprise Edition
  CPU: 4 x 3.4 GHz Intel Xeon Dual Core
  RAM: 16 GB
  Hard drive: 5 x 146 GB, 10K RPM SCSI
  Network interface: 1000 Mbps Ethernet

Database
  Model: HP DL 580 G4
  OS: Windows Server 2003, Enterprise Edition
  CPU: 4 x 3.4 GHz Intel Xeon Dual Core
  RAM: 16 GB
  Hard drive: 8 x 72 GB, 15K RPM SCSI with 512 MB battery-backed write cache
  Network interface: 1000 Mbps Ethernet

Load Generator
  Model: Dell 1850
  OS: Windows Server 2003, Enterprise Edition
  CPU: 2 x 3.0 GHz Intel Xeon HT
  RAM: 4 GB
  Hard drive: 2 x 146 GB SCSI
  Network interface: 1000 Mbps Ethernet

Load Generator
  Model: HP DL 360
  OS: Windows Server 2003, Enterprise Edition
  CPU: 2 x 3.0 GHz Intel Xeon HT
  RAM: 4 GB
  Hard drive: 1 x 136 GB SCSI
  Network interface: 1000 Mbps Ethernet

Load Balancer (Horizontal Scalability)
  Model: F5 BIG IP
  Network interface: Mbps Ethernet

Software Configuration: Horizontal Scalability

WLI:
- HTTP logging: Off
- Log4j tuning: log level 'Error'
- Server log levels: Error (or Warn, if that is the highest level)
- JDBC drivers:
  Oracle Thin driver for cgDataSource-nonXA and cgDataSource
  Oracle Thin XA driver for all other data sources
- JVM: BEA JRockit 1.5.0_06-b05
- JVM parameters: -Xms1536m -Xmx1536m -Xgc:parallel
- cgDataSource: InitialCapacity = MaxCapacity = 75; LLR enabled
- System configuration:
  Reporting data stream: Disabled
  Default tracking level: None
  Default reporting data policy: Off
  Default variable tracking level: Off
  Reliable tracking: Off
  Reliable reporting: Off
- WLI JMS control: send-with-xa = True
- JMS persistence: Off (non-persistent queues used for AsyncDispatcher and wli.internal.instance.info.buffer)

Load generator: Mercury LoadRunner 8.1 FP4

BIG-IP load balancer: server configured to type 'Performance (Layer 4)'

Software Configuration: Vertical Scalability

The software configuration for the vertical scalability tests is the same as for the horizontal scalability tests.

Database Configuration

- Database: Oracle 9i
- Archive log disabled.
- Separate tablespaces for WLI, WLI archive, worklist, indexes, and JPD BLOBs, placed on different hard disks.
- Database machine started with the /3GB switch.
- init.ora parameters:

*.aq_tm_processes=1
*.background_dump_dest='d:\oracle\admin\perfdb2\bdump'
*.compatible=' '
*.control_files='d:\oracle\oradata\perfdb2\control01.ctl','d:\oracle\oradata\perfdb2\control02.ctl','d:\oracle\oradata\perfdb2\control03.CTL'
*.core_dump_dest='d:\oracle\admin\perfdb2\cdump'
*.db_block_size=8192
*.db_cache_size=
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='perfdb2'
*.dispatchers='(protocol=tcp) (SERVICE=PERFDB2XDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=true
*.instance_name='perfdb2'
*.java_pool_size=
*.job_queue_processes=10
*.large_pool_size=
*.open_cursors=300
*.pga_aggregate_target=
*.processes=1000
*.query_rewrite_enabled='false'
*.remote_login_passwordfile='exclusive'
*.shared_pool_size=
*.sort_area_size=
*.star_transformation_enabled='false'
*.timed_statistics=true
*.undo_management='auto'
*.undo_retention=10800
*.undo_tablespace='undotbs1'
*.user_dump_dest='d:\oracle\admin\perfdb2\udump'
Scalability Test Results

The following sections describe the results of the scalability tests.

The following figure shows the results of the vertical scalability tests.

Figure A-4 Results of Vertical Scalability Tests

The tests were executed with 1, 2, and 4 CPUs for WLI. The load was increased gradually until the CPU utilization reached 50%. The graph shows that the SLA of 50 purchase orders processed per second, with a maximum average CPU utilization of 50%, cannot be achieved with four CPUs. You can fit a curve to the results obtained from the tests, derive an equation for the curve, and use it to estimate the additional hardware resources required, as described in Capacity Estimation.
The following figure shows the results of the horizontal scalability tests.

Figure A-5 Results of Horizontal Scalability Tests

The tests were executed with 1, 2, and 4 machines, with one managed server on each machine. The graph shows the average TPS at 50% CPU utilization, and indicates that a 4-node WLI cluster (with one machine per node) can process purchase orders at the target rate at 50% CPU utilization.

Capacity Estimation

In the example described here, the required SLA was achieved with the available hardware resources in the horizontal scaling scenario, whereas it was not achieved in the vertical scaling scenario. If the required SLA is not achieved, you can fit a curve to the results obtained from the tests, derive an equation for the curve, and use it to estimate the additional hardware resources required.
The following figure shows capacity estimation in the vertical scaling scenario.

Figure A-6 Capacity Estimation: Vertical Scaling

For this example, the best fit is a linear equation of the form y = ax + b, where y is the average TPS and x is the number of CPUs. The curve indicates that WLI applications scale linearly.

Note: This equation is based on the test setup described in this guide. The equation that you must use to estimate capacity depends on your test setup, the nature of your application, and the SLA. In addition, the tests described in this document assume an environment where database- and database I/O-related bottlenecks do not exist.
The following figure shows capacity estimation in the horizontal scaling scenario.

Figure A-7 Capacity Estimation: Horizontal Scaling

Again, the best fit is a linear equation of the form y = ax + b, where y is the average TPS and x is the number of nodes in the cluster.

Note: This equation is based on the test setup described in this guide. The equation that you must use to estimate capacity depends on your test setup, the nature of your application, and the SLA. In addition, the tests described in this document assume an environment where database- and database I/O-related bottlenecks do not exist.
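The curve fitting described above can also be done outside a spreadsheet. The following sketch fits a least-squares line to hypothetical (nodes, TPS) data points shaped like the horizontal test results (the numbers are illustrative, not the measured values) and solves the fitted equation for the node count needed to meet a 50-TPS SLA:

```python
import math

def linear_fit(points):
    """Ordinary least-squares fit of y = a*x + b over (x, y) data points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical data points: average TPS at 50% CPU utilization for 1-, 2-,
# and 4-node clusters (the real numbers come from your own test runs).
points = [(1, 14.0), (2, 27.5), (4, 55.0)]
a, b = linear_fit(points)

# Solve y = a*x + b for the node count that meets a 50 TPS SLA,
# rounding up to whole machines.
target_tps = 50
nodes_needed = math.ceil((target_tps - b) / a)
print(f"fit: y = {a:.2f}x + {b:.2f}; nodes for {target_tps} TPS: {nodes_needed}")
```

As the guide notes, a minimum of three data points is needed for the fit; with more points, also check the goodness of fit (R²) before trusting the extrapolation.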
APPENDIX B

WLI Tuning

This section describes the WLI parameters that you can tune in a production environment. It discusses the following topics:

BPM Logging Levels
JDBC Data Source Tuning
AsyncDispatcher MDB Tuning
HTTP Control
Internal Queues
JMS EG Tuning
Document Size
Process Tracking
B2B Process Tracking
Worklist Reporting and Querying
JMS Control Tuning
Test Console
JPD Meta Data Cache Tuning
BPM Logging Levels

The ApacheLog4jCfg.xml file, located in the WLI domain directory, specifies the WLI BPM logging configuration and the logging levels. Set priority levels to debug or info only if such messages are required for debugging and testing purposes. In a production environment, set the priority values to warn or error for all log appenders. Because the Worklist application logs to WebLogic Server as well as to Log4j, you must also set the debug levels in the WebLogic Server console. See JMS Server: Logging for more information.

JDBC Data Source Tuning

WLI uses the following JDBC data sources:

cgDataSource: Used for storing JPD conversation state, process tracking information, and Logging Last Resource (LLR) information.
cgDataSource-nonXA: A non-XA data source used as the persistent store for internal WLI queues.
p13nDataSource: Used by the Worklist.
bpmArchDataSource: Used for archiving database tables.

Monitor these JDBC data sources and ensure that an adequate number of connections is available in the connection pool by adjusting the values of Initial Capacity, Maximum Capacity, and Capacity Increment. It is recommended that InitialCapacity be set equal to MaxCapacity for your connection pool, so that no time is spent growing or shrinking the pool.

If your application is heavily database driven, consider the following:

Pinned-To-Thread property: To minimize the time that an application takes to reserve a database connection from a data source, and to eliminate contention between threads for a database connection, you can enable the Pinned-To-Thread property for the data source. However, when this property is set, each connection remains with its execute thread and is not returned to the connection pool. If the pool contains no free connection, additional connections are created, and these can exceed the number specified in MaxCapacity. Therefore, use the Pinned-To-Thread property only if it improves performance.
For more information, see Using Pinned-To-Thread Property to Increase Performance in WebLogic Server Performance and Tuning.
Statement cache: When you use a prepared statement or callable statement in an application or EJB, there is considerable processing overhead in the communication between the application server and the database server, and on the database server itself. To minimize these costs, WebLogic Server can cache the prepared and callable statements used in your applications. It is recommended that you use the Least Recently Used (LRU) algorithm and set the cache size to the number of statements executed in your application. Determining the exact cache size is a trial-and-error process. For more information, see Increasing Performance with the Statement Cache and Tuning Data Source Connection Pool Options in Configuring and Managing WebLogic JDBC.

AsyncDispatcher MDB Tuning

The default WebLogic Server work manager configuration constrains the number of AsyncDispatcher requests being processed for a particular JPD project to a maximum of 16; consequently, at most 16 threads can handle asynchronous requests for the project. Your system may not use this many threads for CPU-bound processing, because WebLogic Server tunes the number of threads dynamically to optimize throughput. However, for I/O-bound processing, which is typical of WLI applications, you can remove the constraint by configuring a separate work manager for the project. Assigning a separate work manager to the AsyncDispatcher allows more threads, so more AsyncDispatcher MDB instances are created. The same AsyncDispatcher MDB is used by all JPDs in a JPD project. Consider splitting the project into smaller projects in the following scenarios:

You start multiple JPDs of the same project concurrently.
The AsyncDispatcher is under high load.

For more information, see Determining the Number of Concurrent MDBs in Tuning Message-Driven Beans.

HTTP Control

The default number of pooled HTTP connections per host is 2.
To increase the pool size per host, you can use the following system property: wli.httpcontrolmaxconnectionsperhost
The default number of pooled HTTP connections for all hosts is 20. To increase the pool size for all hosts, you can use the following system property: wli.httpcontrolmaxtotalconnections

Internal Queues

WLI uses many internal queues in the conversational-jms module. By default, most of these queues use a persistent JDBC store. If a queue does not require JDBC persistence, you can change it to 'Non-Persistent' or use the File Persistence parameter. In particular, changing the wli.internal.instance.info.buffer queue, which is used for process instance information, to Non-Persistent can significantly improve WLI performance.

Note: Changing the persistence of a queue can cause data loss if the server or the machine crashes unexpectedly.

For more information, see Tuning the WebLogic Persistent Store and Tuning WebLogic JMS in WebLogic Server Performance and Tuning.

JMS EG Tuning

By default, the JMS event generator (EG) application created in the WLI domain directory has 16 MDB instances due to self-tuning. You can try increasing the pool size to improve performance. If you want more than 16 MDB instances, assign a work manager with higher minimum and maximum thread constraints as the dispatch-policy of the EG. For more information, see Determining the Number of Concurrent MDBs in Tuning Message-Driven Beans.

Document Size

The weblogic.wli.DocumentMaxInlineSize parameter in wli-config.properties, in the WLI domain directory, specifies the maximum size of a document passed inline rather than stored in the SQL document store. If a document (incoming message) is larger than this value, only a document reference is included in the message; otherwise, the document is passed as a copy. When large documents are passed by reference, the document store is contacted more frequently, which increases database I/O but decreases memory usage. The value of this threshold is therefore important.
You can determine the size of your messages by using Message Tracking. Tune the threshold value to improve performance. If a document is exchanged among many JPDs, you can use the document store to pass a document reference to the JPDs instead of passing the same data to each of them.

Process Tracking

Table B-1 lists the levels of process tracking and the events that each generates. From a performance perspective, system performance degrades as you move from the NONE level to the FULL level.

Table B-1 Levels of Process Tracking

NONE: No event is generated.
MINIMUM: Only global events, such as process start/end, suspend, and resume, are recorded.
NODE: Each executed node generates a start node event and an end/abort node event.
FULL: NODE events, plus events that result from the execution of JpdContext.log(), are generated.

If you change the level from NONE to FULL, many more events are generated, leading to more database operations for process instance information and process tracking data. You can partition the WLI_PROCESS_EVENT table, which is the table affected by higher levels of process tracking. For more information about partitioning the WLI_PROCESS_EVENT table, see the Oracle9i Database Tuning Guide.

For more information about process tracking, see Process Tracking Data in Using the WebLogic Integration Administration Console.

B2B Process Tracking

For trading partner integration, the following levels of message tracking are provided:

None
Metadata
All
When you change the message tracking level from None to All, the additional database operations can have an impact on performance. For more information, see Configuring Trading Partner Management in Using the WebLogic Integration Administration Console.

Worklist Reporting and Querying

When database load is high, the Worklist reporting feature can have an impact on performance, because it requires additional database operations. It is recommended that you move the reporting store to a different database machine by pointing a separate reporting data source at a machine other than the one used by p13nDataSource. For more information about the default reporting store, see Default Reporting Store in Using Worklist.

Do not try to fetch all Worklist task data at once; the system slows down when the database contains many tasks. You can use WorklistTaskQuery.getTaskDataCursor to fetch a maximum of 1000 tasks from the database. WorklistTaskDataCursor fetches tasks in batches, as specified by batchsize.

JMS Control Tuning

If you are using the WLI JMS control, you can set its send-with-xa annotation to true for better performance with WLI.

Test Console

The test console stores all logs in the JVM heap, so you must clear the logs to avoid affecting system performance. By default, the test console (the testconsoleflag setting in the setDomainEnv file) is enabled for a development domain and disabled for a production domain. Do not enable the test console on production systems unless it is required for debugging purposes.

JPD Meta Data Cache Tuning

To control the behavior of the JPD process metadata cache, you can do either of the following:

Set the wlw.softdispreferences system property to cache the metadata using soft references, so that the references are relinquished when memory runs low.
The only drawback is that soft references can significantly increase garbage collection times.

Set the optional com.bea.wli.dispatcher.cachesize property in the JAVA_PROPERTIES variable of setDomainEnv.cmd (or setDomainEnv.sh). This numeric value defines the bounds of the cache and is interpreted as follows:

A value of -1 represents an unbounded cache. The metadata of all JPDs is cached.
A value of 0 disables the cache completely. No JPD metadata is cached.
A positive non-zero value 'n' represents a bounded cache. The metadata of the 'n' most recently accessed JPDs is cached.
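The three cache-size settings can be illustrated with a small bounded LRU cache. This is a sketch of the semantics only, not WLI's actual implementation; the BoundedCache class and the metadata loader below are hypothetical:

```python
from collections import OrderedDict

class BoundedCache:
    """Illustrates the cache-size semantics described above: -1 means an
    unbounded cache, 0 disables caching, and n > 0 keeps the metadata of
    the n most recently accessed entries, evicting the least recently used."""

    def __init__(self, size):
        self.size = size
        self._data = OrderedDict()

    def get(self, key, loader):
        if self.size == 0:
            return loader(key)           # cache disabled: always reload
        if key in self._data:
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]
        value = loader(key)
        self._data[key] = value
        if self.size > 0 and len(self._data) > self.size:
            self._data.popitem(last=False)  # evict the least recently used
        return value

# Hypothetical loader standing in for reading JPD metadata from disk.
loads = []
def load_metadata(jpd):
    loads.append(jpd)
    return "meta:" + jpd

cache = BoundedCache(2)                  # bounded cache of size 2
for jpd in ["A", "B", "A", "C", "B"]:
    cache.get(jpd, load_metadata)
print(loads)  # ['A', 'B', 'C', 'B'] -- "B" was evicted when "C" entered
```

Sizing the bound close to the number of JPDs accessed concurrently avoids repeated reloads of the same metadata, which is the cost the cache is meant to eliminate.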
Recommendations for Performance Benchmarking
Recommendations for Performance Benchmarking Shikhar Puri Abstract Performance benchmarking of applications is increasingly becoming essential before deployment. This paper covers recommendations and best
White Paper. Recording Server Virtualization
White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...
PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions. Outline. Performance oriented design
PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions Slide 1 Outline Principles for performance oriented design Performance testing Performance tuning General
Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study
White Paper Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study 2012 Cisco and/or its affiliates. All rights reserved. This
Web Application s Performance Testing
Web Application s Performance Testing B. Election Reddy (07305054) Guided by N. L. Sarda April 13, 2008 1 Contents 1 Introduction 4 2 Objectives 4 3 Performance Indicators 5 4 Types of Performance Testing
Performance Workload Design
Performance Workload Design The goal of this paper is to show the basic principles involved in designing a workload for performance and scalability testing. We will understand how to achieve these principles
Performance Comparison of Fujitsu PRIMERGY and PRIMEPOWER Servers
WHITE PAPER FUJITSU PRIMERGY AND PRIMEPOWER SERVERS Performance Comparison of Fujitsu PRIMERGY and PRIMEPOWER Servers CHALLENGE Replace a Fujitsu PRIMEPOWER 2500 partition with a lower cost solution that
Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1
Performance Study Performance Characteristics of and RDM VMware ESX Server 3.0.1 VMware ESX Server offers three choices for managing disk access in a virtual machine VMware Virtual Machine File System
Oracle Primavera P6 Enterprise Project Portfolio Management Performance and Sizing Guide. An Oracle White Paper October 2010
Oracle Primavera P6 Enterprise Project Portfolio Management Performance and Sizing Guide An Oracle White Paper October 2010 Disclaimer The following is intended to outline our general product direction.
Oracle WebLogic Server Monitoring and Performance Tuning
Oracle WebLogic Server Monitoring and Performance Tuning Duško Vukmanović Principal Sales Consultant, FMW Stuck Threads A Label given to threads not returned to thread pool after
SQL Server Consolidation Using Cisco Unified Computing System and Microsoft Hyper-V
SQL Server Consolidation Using Cisco Unified Computing System and Microsoft Hyper-V White Paper July 2011 Contents Executive Summary... 3 Introduction... 3 Audience and Scope... 4 Today s Challenges...
System Models for Distributed and Cloud Computing
System Models for Distributed and Cloud Computing Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Classification of Distributed Computing Systems
System Requirements Table of contents
Table of contents 1 Introduction... 2 2 Knoa Agent... 2 2.1 System Requirements...2 2.2 Environment Requirements...4 3 Knoa Server Architecture...4 3.1 Knoa Server Components... 4 3.2 Server Hardware Setup...5
Throughput Capacity Planning and Application Saturation
Throughput Capacity Planning and Application Saturation Alfred J. Barchi [email protected] http://www.ajbinc.net/ Introduction Applications have a tendency to be used more heavily by users over time, as the
Accelerating Server Storage Performance on Lenovo ThinkServer
Accelerating Server Storage Performance on Lenovo ThinkServer Lenovo Enterprise Product Group April 214 Copyright Lenovo 214 LENOVO PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER
Performance Testing of Java Enterprise Systems
Performance Testing of Java Enterprise Systems Katerina Antonova, Plamen Koychev Musala Soft Why Performance Testing? Recent studies by leading USA consultancy companies showed that over 80% of large corporations
Microsoft Windows Server 2003 with Internet Information Services (IIS) 6.0 vs. Linux Competitive Web Server Performance Comparison
April 23 11 Aviation Parkway, Suite 4 Morrisville, NC 2756 919-38-28 Fax 919-38-2899 32 B Lakeside Drive Foster City, CA 9444 65-513-8 Fax 65-513-899 www.veritest.com [email protected] Microsoft Windows
NetIQ Access Manager 4.1
White Paper NetIQ Access Manager 4.1 Performance and Sizing Guidelines Performance, Reliability, and Scalability Testing Revisions This table outlines all the changes that have been made to this document
Performance Analysis and Capacity Planning Whitepaper
Performance Analysis and Capacity Planning Whitepaper Contents P E R F O R M A N C E A N A L Y S I S & Executive Summary... 3 Overview... 3 Product Architecture... 4 Test Environment... 6 Performance Test
Winning the J2EE Performance Game Presented to: JAVA User Group-Minnesota
Winning the J2EE Performance Game Presented to: JAVA User Group-Minnesota Michelle Pregler Ball Emerging Markets Account Executive Shahrukh Niazi Sr.System Consultant Java Solutions Quest Background Agenda
Windows Server Performance Monitoring
Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly
PERFORMANCE TESTING. New Batches Info. We are ready to serve Latest Testing Trends, Are you ready to learn.?? START DATE : TIMINGS : DURATION :
PERFORMANCE TESTING We are ready to serve Latest Testing Trends, Are you ready to learn.?? New Batches Info START DATE : TIMINGS : DURATION : TYPE OF BATCH : FEE : FACULTY NAME : LAB TIMINGS : Performance
XTM Web 2.0 Enterprise Architecture Hardware Implementation Guidelines. A.Zydroń 18 April 2009. Page 1 of 12
XTM Web 2.0 Enterprise Architecture Hardware Implementation Guidelines A.Zydroń 18 April 2009 Page 1 of 12 1. Introduction...3 2. XTM Database...4 3. JVM and Tomcat considerations...5 4. XTM Engine...5
Tuning Your GlassFish Performance Tips. Deep Singh Enterprise Java Performance Team Sun Microsystems, Inc.
Tuning Your GlassFish Performance Tips Deep Singh Enterprise Java Performance Team Sun Microsystems, Inc. 1 Presentation Goal Learn tips and techniques on how to improve performance of GlassFish Application
Agenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.
Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance
What Is Specific in Load Testing?
What Is Specific in Load Testing? Testing of multi-user applications under realistic and stress loads is really the only way to ensure appropriate performance and reliability in production. Load testing
Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering
Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations A Dell Technical White Paper Database Solutions Engineering By Sudhansu Sekhar and Raghunatha
11.1 inspectit. 11.1. inspectit
11.1. inspectit Figure 11.1. Overview on the inspectit components [Siegl and Bouillet 2011] 11.1 inspectit The inspectit monitoring tool (website: http://www.inspectit.eu/) has been developed by NovaTec.
Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays. Red Hat Performance Engineering
Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays Red Hat Performance Engineering Version 1.0 August 2013 1801 Varsity Drive Raleigh NC
Exam : Oracle 1Z0-108. : Oracle WebLogic Server 10gSystem Administration. Version : DEMO
Exam : Oracle 1Z0-108 Title : Oracle WebLogic Server 10gSystem Administration Version : DEMO 1. Scenario : A single tier WebLogic cluster is configured with six Managed Servers. An Enterprise application
Amazon EC2 XenApp Scalability Analysis
WHITE PAPER Citrix XenApp Amazon EC2 XenApp Scalability Analysis www.citrix.com Table of Contents Introduction...3 Results Summary...3 Detailed Results...4 Methods of Determining Results...4 Amazon EC2
Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework
Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework Many corporations and Independent Software Vendors considering cloud computing adoption face a similar challenge: how should
SIDN Server Measurements
SIDN Server Measurements Yuri Schaeffer 1, NLnet Labs NLnet Labs document 2010-003 July 19, 2010 1 Introduction For future capacity planning SIDN would like to have an insight on the required resources
Managing your Red Hat Enterprise Linux guests with RHN Satellite
Managing your Red Hat Enterprise Linux guests with RHN Satellite Matthew Davis, Level 1 Production Support Manager, Red Hat Brad Hinson, Sr. Support Engineer Lead System z, Red Hat Mark Spencer, Sr. Solutions
Performance brief for Oracle Enterprise Financial Management 8.9 (Order-to-Cash Counter Sales) on HP Integrity BL870c server blades
Performance brief for Oracle Enterprise Financial Management 8.9 (Order-to-Cash Counter Sales) on HP Integrity BL870c server blades Executive summary... 2 Test methodology... 3 Inventory... 4 Billing...
Performance Modeling for Web based J2EE and.net Applications
Performance Modeling for Web based J2EE and.net Applications Shankar Kambhampaty, and Venkata Srinivas Modali Abstract When architecting an application, key nonfunctional requirements such as performance,
A Performance Engineering Story
CMG'09 A Performance Engineering Story with Database Monitoring Alexander Podelko [email protected] 1 Abstract: This presentation describes a performance engineering project in chronological order. The
- An Essential Building Block for Stable and Reliable Compute Clusters
Ferdinand Geier ParTec Cluster Competence Center GmbH, V. 1.4, March 2005 Cluster Middleware - An Essential Building Block for Stable and Reliable Compute Clusters Contents: Compute Clusters a Real Alternative
Qualogy 2014-08-29 M. Schildmeijer. Whitepaper Oracle Exalogic FMW Optimization
Qualogy 2014-08-29 M. Schildmeijer Whitepaper Oracle Exalogic FMW Optimization 1 Inhoudsopgave 1. Preface... 3 2. WebLogic Domain Level... 4 2.1 Domain Enhancements... 4 2.2 JDBC SDP enhancement... 4 2.3
VMWARE WHITE PAPER 1
1 VMWARE WHITE PAPER Introduction This paper outlines the considerations that affect network throughput. The paper examines the applications deployed on top of a virtual infrastructure and discusses the
Benchmarking Guide. Performance. BlackBerry Enterprise Server for Microsoft Exchange. Version: 5.0 Service Pack: 4
BlackBerry Enterprise Server for Microsoft Exchange Version: 5.0 Service Pack: 4 Performance Benchmarking Guide Published: 2015-01-13 SWD-20150113132750479 Contents 1 BlackBerry Enterprise Server for Microsoft
Cognos8 Deployment Best Practices for Performance/Scalability. Barnaby Cole Practice Lead, Technical Services
Cognos8 Deployment Best Practices for Performance/Scalability Barnaby Cole Practice Lead, Technical Services Agenda > Cognos 8 Architecture Overview > Cognos 8 Components > Load Balancing > Deployment
Benchmarking Cassandra on Violin
Technical White Paper Report Technical Report Benchmarking Cassandra on Violin Accelerating Cassandra Performance and Reducing Read Latency With Violin Memory Flash-based Storage Arrays Version 1.0 Abstract
No.1 IT Online training institute from Hyderabad Email: [email protected] URL: sriramtechnologies.com
I. Basics 1. What is Application Server 2. The need for an Application Server 3. Java Application Solution Architecture 4. 3-tier architecture 5. Various commercial products in 3-tiers 6. The logic behind
Test Run Analysis Interpretation (AI) Made Easy with OpenLoad
Test Run Analysis Interpretation (AI) Made Easy with OpenLoad OpenDemand Systems, Inc. Abstract / Executive Summary As Web applications and services become more complex, it becomes increasingly difficult
Muse Server Sizing. 18 June 2012. Document Version 0.0.1.9 Muse 2.7.0.0
Muse Server Sizing 18 June 2012 Document Version 0.0.1.9 Muse 2.7.0.0 Notice No part of this publication may be reproduced stored in a retrieval system, or transmitted, in any form or by any means, without
Performance White Paper
Sitecore Experience Platform 8.1 Performance White Paper Rev: March 11, 2016 Sitecore Experience Platform 8.1 Performance White Paper Sitecore Experience Platform 8.1 Table of contents Table of contents...
HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief
Technical white paper HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Scale-up your Microsoft SQL Server environment to new heights Table of contents Executive summary... 2 Introduction...
Application. Performance Testing
Application Performance Testing www.mohandespishegan.com شرکت مهندش پیشگان آزمون افسار یاش Performance Testing March 2015 1 TOC Software performance engineering Performance testing terminology Performance
Evaluation Report: Supporting Microsoft Exchange on the Lenovo S3200 Hybrid Array
Evaluation Report: Supporting Microsoft Exchange on the Lenovo S3200 Hybrid Array Evaluation report prepared under contract with Lenovo Executive Summary Love it or hate it, businesses rely on email. It
MID-TIER DEPLOYMENT KB
MID-TIER DEPLOYMENT KB Author: BMC Software, Inc. Date: 23 Dec 2011 PAGE 1 OF 16 23/12/2011 Table of Contents 1. Overview 3 2. Sizing guidelines 3 3. Virtual Environment Notes 4 4. Physical Environment
