BEA AquaLogic Data Services Platform Performance
BEA White Paper
BEA AquaLogic Data Services Platform Performance
A benchmark-based case study
Copyright

Copyright BEA Systems, Inc. All Rights Reserved.

Restricted Rights Legend

This software is protected by copyright, and may be protected by patent laws. No copying or other use of this software is permitted unless you have entered into a license agreement with BEA authorizing such use. This document is protected by copyright and may not be copied, photocopied, reproduced, translated, or reduced to any electronic medium or machine-readable form, in whole or in part, without prior consent, in writing, from BEA Systems, Inc. Information in this document is subject to change without notice and does not represent a commitment on the part of BEA Systems. THE DOCUMENTATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, INCLUDING, WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. FURTHER, BEA SYSTEMS DOES NOT WARRANT, GUARANTEE, OR MAKE ANY REPRESENTATIONS REGARDING THE USE, OR THE RESULTS OF THE USE, OF THE DOCUMENT IN TERMS OF CORRECTNESS, ACCURACY, RELIABILITY, OR OTHERWISE.

Trademarks and Service Marks

BEA, BEA JRockit, BEA WebLogic Portal, BEA WebLogic Server, BEA WebLogic Workshop, Built on BEA, Jolt, JoltBeans, SteelThread, Top End, Tuxedo, and WebLogic are registered trademarks of BEA Systems, Inc.
BEA AquaLogic, BEA AquaLogic Data Services Platform, BEA AquaLogic Enterprise Security, BEA AquaLogic Service Bus, BEA AquaLogic Service Registry, BEA Builder, BEA Campaign Manager for WebLogic, BEA elink, BEA Liquid Data for WebLogic, BEA Manager, BEA MessageQ, BEA WebLogic Commerce Server, BEA WebLogic Communications Platform, BEA WebLogic Enterprise, BEA WebLogic Enterprise Platform, BEA WebLogic Enterprise Security, BEA WebLogic Express, BEA WebLogic Integration, BEA WebLogic Java Adapter for Mainframe, BEA WebLogic JDriver, BEA WebLogic Log Central, BEA WebLogic Network Gatekeeper, BEA WebLogic Personalization Server, BEA WebLogic Personal Messaging API, BEA WebLogic Platform, BEA WebLogic Portlets for Groupware Integration, BEA WebLogic Server Process Edition, BEA WebLogic SIP Server, BEA WebLogic WorkGroup Edition, Dev2Dev, Liquid Computing, and Think Liquid are trademarks of BEA Systems, Inc. BEA Mission Critical Support, BEA Mission Critical Support Continuum, and BEA SOA Self Assessment are service marks of BEA Systems, Inc. All other names and marks are property of their respective owners. CWP1079E1105-1A
Table of Contents

Executive Summary
Introduction
Query Processing Architecture
Data Service Performance Factors
Retail Customer Self-Service Benchmark
  Benchmark queries
  Configuration
  Data characteristics
  Benchmark results
  Query-only workload
  Cluster benchmark results
Conclusion
Addendum A: Large data transfer within BEA AquaLogic Data Services Platform
About BEA
Executive Summary

This white paper reviews the architecture, query processing techniques, and key performance factors associated with the BEA AquaLogic Data Services Platform. Experimental results from an in-depth look at the performance of a sample application, based on a retail customer scenario, illustrate key system performance characteristics and demonstrate how those undertaking a BEA AquaLogic Data Services Platform project might conduct such a study.

The sample application studied involves a mix of read and write services that integrate data from, and apply changes to, both relational and Web service-based data sources. For some service calls, up to five heterogeneous sources provided the data to satisfy a single service request. Factors varied include the specific mix of requests comprising a workload, the workload intensity (number of requesting clients), and the size of the cluster on which the system was deployed. For the sample application studied and the variations explored, the key findings can be summarized as follows:

- In a single-server environment, BEA AquaLogic Data Services Platform scales linearly and gracefully as the number of concurrent clients is increased. Measured service times are sufficiently small (sub-second) to provide responsiveness for interactive applications throughout the system loading range explored. Overall throughput scales linearly, as expected, as the offered load on the system is increased.
- In a clustered environment, the overall service capacity (throughput) of BEA AquaLogic Data Services Platform scales linearly as hardware is added to the cluster on which the platform is deployed. Clustering is demonstrated to be a very effective scaling technique for BEA AquaLogic Data Services Platform applications of this sort.

These conclusions hold for both read-only workloads and read-write workload mixes.
In addition to examining the multi-user performance of a BEA AquaLogic Data Services Platform system, this paper provides a first look at the performance characteristics of the server-side streaming APIs now provided in the system. Two sets of results are reported: one involving all instances obtainable from the five-source integrated data service used in the multi-user study (to demonstrate a sample integrate-and-stream use case) and another involving data from just one underlying source (to demonstrate a less complex streaming use case). BEA AquaLogic Data Services Platform is shown to stream effectively for both. Full materialization of data is avoided in both use cases, allowing large data volumes to be integrated and streamed using a modest, result-size-independent memory configuration; the system is shown to perform well for both simple and complex streaming use cases.

Introduction

This paper provides a brief overview of the architecture and query processing techniques used in the BEA AquaLogic Data Services Platform. It explains the key performance factors affecting applications that use data services-based integration technology. The paper then describes a benchmarking exercise undertaken with BEA AquaLogic Data Services Platform. This benchmark, based on a prototypical retail customer application scenario, was designed to be
simple to understand yet complex enough to be indicative of the applications and heterogeneous data source scenarios for which the platform is intended. Based on the benchmark, the paper presents performance data that addresses a number of common questions about the platform. This information is intended to help identify key factors to be considered in projects using the BEA AquaLogic Data Services Platform.

Query Processing Architecture

Figure 1 shows an overview of the platform architecture. The BEA AquaLogic Data Services Platform server (in the center of the figure) runs as a BEA WebLogic Server application. In most cases, client applications request the platform server to execute one of a collection of previously prepared data service functions. Ad hoc queries are also supported. Client requests can come through any of the platform's APIs, including a Java mediator API, Web services, a BEA WebLogic Workshop Control, or a JDBC/SQL92 API.

Data service functions and ad hoc queries are written in the XQuery language against an integrated XML-based model of the enterprise's diverse data sources. In a typical scenario, a data services architect predefines this model to provide a clean, unified set of data services that integrate with the source data. The components of this data services layer are expressed in XQuery, just as SQL is used to define views in relational database systems. BEA AquaLogic Data Services Platform provides graphical data services tooling to help data architects and developers quickly construct and test their XQuery data services and queries. The platform also supports updates through all but its JDBC/SQL92 API.

Figure 1: Architectural overview of BEA AquaLogic Data Services Platform. (The figure shows client applications such as portals, Web applications, and business processes calling the platform through the Java mediator, Web services, Workshop Control, and JDBC APIs; data service design tools and an administration console for caching, security, and management; the distributed query processor, data services, and cache; and connections via JDBC, Web service adapters, and custom functions to relational databases, data warehouses/marts, business partner Web services, XML files, in-flight XML messages, and packaged, custom, and legacy applications.)
Figure 2 gives an overview of the BEA AquaLogic Data Services Platform query processing architecture. At query compilation time, the distributed query processor translates an XQuery request into an optimized distributed query plan. This involves parsing and analyzing the query, unfolding or inlining the definitions of any views used in it, and then optimizing it to determine the best set of underlying data source requests and other intermediate processing steps needed to compute the query result. The query processor then executes the sequence of resulting queries and/or function calls against the relevant data sources. At query execution time, the server combines the results returned from the underlying data sources to form a single XML result that is returned to the client application through whichever API the request came in on.

The BEA AquaLogic Data Services Platform query processor is designed to efficiently run queries that span a set of diverse data sources. When compared to hand-coding a given query, the platform aims to add minimal overhead; in fact, using distributed query processing techniques and efficient join methods, it aims to outperform what an average developer might achieve in a hand-coded application. The following is a synopsis of some key query processing technologies employed by the BEA AquaLogic Data Services Platform:

SQL pushdown. When accessing data from relational databases, BEA AquaLogic Data Services Platform seeks to offload (or "push down") as much query processing as possible to each of the underlying relational database systems. This gives each individual system the opportunity to optimize its portion of the query, while limiting the amount of intermediate data returned from each source. Query operations involving string searches, comparison operations, local joins, sorting, aggregate functions, and grouping operations are all pushed down when possible.
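The pushdown decision can be illustrated with a small sketch. Everything here is hypothetical: the real optimizer works on XQuery plans and per-source capability descriptions, not on predicate strings, and the capability list and helper names below are invented purely for illustration.

```python
# Illustrative sketch only: partition query predicates into those a
# relational source's SQL dialect can evaluate (pushed down) and those
# that must run locally in the integration server.

# Hypothetical capability list for one relational source.
SOURCE_CAPABILITIES = {"eq", "lt", "gt", "like", "order_by", "sum", "group_by"}

def split_predicates(predicates):
    """Partition (operation, SQL-fragment) pairs into pushable and local."""
    pushed, local = [], []
    for op, fragment in predicates:
        (pushed if op in SOURCE_CAPABILITIES else local).append(fragment)
    return pushed, local

def build_source_query(table, pushed):
    """Compose the SQL actually sent to the underlying database."""
    where = " AND ".join(pushed) if pushed else "1=1"
    return f"SELECT * FROM {table} WHERE {where}"

predicates = [
    ("eq", "state = 'CA'"),            # comparison: pushable
    ("like", "name LIKE 'Sm%'"),       # string search: pushable
    ("custom_fn", "score(row) > 0.8"), # custom XQuery function: local only
]
pushed, local = split_predicates(predicates)
sql = build_source_query("CUSTOMER", pushed)
# The source evaluates two predicates; only score() runs in the server.
```

The point of the split is the one made above: the more of the WHERE clause the source evaluates, the less intermediate data crosses the network.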
Figure 2: Query processing overview for BEA AquaLogic Data Services Platform. (The figure shows an XQuery request passing through compilation, comprising query parsing/analysis, query rewrite/optimization against source descriptions, source schemas (.XSD), and function descriptions, and runtime query plan generation; the runtime query engine then executes the plan, issuing queries and fetch requests to the sources and assembling their results into the XQuery results returned to the client.)
Distributed join methods. When a query condition involves data drawn from two or more sources, BEA AquaLogic Data Services Platform employs one or more distributed join processing techniques: batched parameter passing join, index join, and nested loops join.

In a batched parameter passing join, BEA AquaLogic Data Services Platform passes join values from one data source to another (the latter must be a relational source) in batches, reducing by an order of magnitude or more the number of SQL calls that would otherwise be needed for the join. This join method is preferred when the right-hand side of a join is a relational data source.

In an index join, the platform fetches one of the join targets in its entirety into the BEA AquaLogic Data Services Platform server in one call, indexes the resulting data, and then performs the join locally. Compared to parameter passing, this technique involves fewer remote calls but requires much more server memory; it may also involve more data movement. This method is used when the right-hand side of a join is a non-SQL source but the join predicate is amenable to indexing.

In a nested loops join, BEA AquaLogic Data Services Platform fetches one join target into the server, then performs the join locally by looping over this materialized target while scanning the other join target. This technique, the most general, is the method of last resort, used for non-SQL sources when the join predicate is not amenable to indexing.

Slow data source handling. The platform's runtime query engine can employ parallelism to reduce latency for queries involving slow functional data sources, such as remote Web services or slow application views over legacy back-end systems. The server includes an asynchronous request manager that manages a thread pool used to execute function calls asynchronously on a separate thread, reducing idle time in the engine and thereby increasing query performance.
The server will create query plans that call multiple slow functions in parallel when guided to do so by the inclusion of async() functions around expressions in the query to be executed. The server also provides a timeout() function that can be used to limit the amount of time the server will spend processing a given XQuery expression before failing over to a backup expression (e.g., in the event of an unresponsive primary data source).

Data service result caching. The BEA AquaLogic Data Services Platform can be configured, on a per-query basis, to cache results for calls to any of its data service functions. When this is done, the server stores the function results in a cache database and indexes them by function name and associated parameter values. Data service function caching is appropriate for functions that are frequently invoked and for which cached results (current within a specified time-to-live) are acceptable. On invocation of a function whose results are cached, the server can fetch and return the results quickly. This places a lighter load on the back-end systems involved in the query and provides increased query responsiveness. Typical uses of data service function caching are to cache results from a high-latency Web service call or a costly data integration function.

Server clustering. The BEA AquaLogic Data Services Platform is deployed on the BEA WebLogic Application Server infrastructure as a J2EE component. As a result, it is easy to scale the platform using the application server's clustering features.
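The batched parameter passing join described earlier can be sketched as follows. This is an illustrative model only; the batch size, the implied IN-list query shape, and all names are assumptions for the sketch, not ALDSP internals.

```python
# Hedged sketch of a batched parameter passing join: instead of one SQL
# call per left-hand row, join keys are shipped to the relational side in
# batches (conceptually via SELECT ... WHERE key IN (...)), cutting the
# number of remote calls by roughly the batch size.

def batched_parameter_join(left_rows, key, fetch_batch, batch_size=3):
    """left_rows: rows from the first source.
    fetch_batch(keys) -> {key: right_row}, simulating one SQL round trip."""
    joined, calls = [], 0
    keys = [row[key] for row in left_rows]
    lookup = {}
    for i in range(0, len(keys), batch_size):
        lookup.update(fetch_batch(keys[i:i + batch_size]))  # one remote call
        calls += 1
    for row in left_rows:
        if row[key] in lookup:
            joined.append({**row, **lookup[row[key]]})
    return joined, calls

# Simulated relational right-hand side of the join.
ORDERS = {1: {"total": 40}, 2: {"total": 75}, 4: {"total": 10}}

def fetch_batch(keys):
    return {k: ORDERS[k] for k in keys if k in ORDERS}

customers = [{"cust": "a", "oid": 1}, {"cust": "b", "oid": 2},
             {"cust": "c", "oid": 3}, {"cust": "d", "oid": 4}]
rows, remote_calls = batched_parameter_join(customers, "oid", fetch_batch)
# 4 left rows with batch size 3 -> 2 remote calls instead of 4.
```

With realistic batch sizes (hundreds of keys per call), this is where the "order of magnitude or more" reduction in SQL calls comes from.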
Data Service Performance Factors

Many factors, including the following, influence the overall server performance of the BEA AquaLogic Data Services Platform for a given application:

- The complexity of the data services and queries used by the application
- The percentage of queries versus update operations performed against the data sources
- The nature of the underlying data sources, including:
  - The kind of sources (relational or functional)
  - How expensive it is to call them (e.g., calling a nearby database versus a Web service somewhere across the globe)
  - How heavily loaded they are with requests from other applications that rely on the same data sources (e.g., requests from users of a packaged application that is exposed to the platform as a functional data source)
- The volume of source data touched by the request, and the fraction of it that must be moved across the network or spooled to/from disk to obtain an answer
- The expected number of concurrent users of the deployed application and the resulting concurrent query mix

Some of these factors are under the control of the application developer, some are under the control of those provisioning the infrastructure for the application and its server cluster, and some are inherent in the IT environment, outside the control of anything having to do with the platform. For example, if all data sources for a given application are relational databases, and all are heavily loaded due to other demands on the same servers, then the performance of the BEA AquaLogic Data Services Platform will be determined largely by the environment. For any given request, if the server spends most of its time waiting for back-end sources to respond with data, little (aside from enabling data service result caching) can be done at the platform level to tune performance or help the system scale. Put simply, what happens from an application performance standpoint depends, to a large extent, on the degree to which requests are back-end-server-bound versus middleware-server-bound.
This in turn depends on how pushable the requests are to the data sources (which, for a relational source, depends on a combination of the request's complexity and the specific dialect of SQL that the source in question speaks), how expensive the pushed subqueries are, how fast the data sources respond, how much data is retrieved from the sources, how much of the query processing work must be performed in the server, and how much pressure the platform experiences from concurrent requests.

Because of the complexity of the data services performance question, the best approach is to understand the forces and factors at work, enabling informed reasoning about what might happen in a given application scenario. Of course, experimenting with the application itself in a Proof of Concept (POC) is highly recommended. The information provided here is intended to help in choosing an appropriate scope for any given POC, based on reasoning about the key issues and therefore about which open performance questions may need to be investigated.
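The data service result caching mentioned above follows a familiar pattern: look up the (function name, parameter values) key, return the cached result if it is still within its time-to-live, and otherwise invoke the function and store the result. Here is a minimal in-memory sketch of that pattern; ALDSP itself persists cached results in a cache database, and the class and function names here are invented for illustration.

```python
import time

# Toy model of data service result caching: results keyed by
# (function name, parameters) with a time-to-live.

class ResultCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # (fn_name, params) -> (expiry_time, result)
        self.hits = self.misses = 0

    def call(self, fn, *params):
        key = (fn.__name__, params)
        entry = self.entries.get(key)
        now = time.monotonic()
        if entry and entry[0] > now:
            self.hits += 1
            return entry[1]           # fresh cached result, back end untouched
        self.misses += 1
        result = fn(*params)          # invoke the (expensive) data service
        self.entries[key] = (now + self.ttl, result)
        return result

def get_credit_rating(customer_id):   # stand-in for a high-latency service
    return {"customer": customer_id, "rating": 700}

cache = ResultCache(ttl_seconds=60)
cache.call(get_credit_rating, "CUST-1")  # miss: hits the back end
cache.call(get_credit_rating, "CUST-1")  # hit: served from cache
cache.call(get_credit_rating, "CUST-2")  # different parameters: miss
```

As the text notes, this trades freshness (bounded by the TTL) for reduced back-end load and faster responses on repeated invocations.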
Retail Customer Self-Service Benchmark

To make the theory concrete, consider a sample application for which the actual performance of the BEA AquaLogic Data Services Platform can be measured, both for clients querying data sources and for a mixed workload of client queries and updates. Any given application will differ from this example, of course, so actual results will vary; still, it provides two reasonable data sets that illustrate the effects of some of the factors discussed. Moreover, the data service functions (queries), updates, and data sources in this benchmark application were chosen to be typical of how BEA customers are likely to use BEA AquaLogic Data Services Platform, at least in the near term: that is, using the platform to create unified services that enable integrated data from databases and/or Web services to be accessed and updated in real time by SOA-based Web applications.

A hypothetical retail customer self-service application, based directly on the sample application that ships with the BEA AquaLogic Data Services Platform, is used for both scenarios of this benchmark (the Retail Customer Self-Service, or CSS, Benchmark). This application provides a Web-based customer self-service portal that shows a given customer's profile information, credit rating, orders from the hypothetical company's electronics and apparel division order management systems, registered credit cards, shipping addresses, and service cases. Figure 3 shows the overall structure of the benchmark.

Figure 3: Retail CSS benchmark overview. (The figure shows clients driving the BEA AquaLogic Data Services Platform server, which accesses the customer, billing, service, and apparel order databases directly and calls the electronics order and credit rating Web services; the two Web services are clustered on the same physical machine, and the electronics order Web service is backed by its own electronics order database.)
The Retail CSS Benchmark application involves data spread across four relational data sources and two Web services.

The four relational data sources:
- Customer database: contains customer profile data and shipping addresses
- Billing database: contains customers' registered credit cards
- Service database: contains service cases
- Apparel order database: contains orders and line items for apparel division orders

The two Web services:
1. Electronics order Web service: returns orders with line items for electronics division orders; uses a relational database (very similar to the apparel order database) to store order information
2. Credit rating Web service: returns customer credit rating information

Benchmark queries

The Retail CSS Benchmark entails two scenarios. The first is a query-only mix consisting of five typical data service requests for such an application. The queries are listed in Table 1.

Table 1: Query functions.

Benchmark Request | Result Size | Query Description (including data sources)
Customer Profile | 4K | Return customer info with address and credit cards (2 RDBMSs)
Electronics Order Detail | 3K | Return order with line items plus billing and shipping info for an electronics order (2 RDBMSs, 1 Web service)
Customer Credit Rating | 1K | Return customer info plus customer's credit rating (1 RDBMS, 1 Web service)
Apparel Order Detail | 3K | Return order with line items plus billing and shipping info for an apparel order (3 RDBMSs)
Login Customer View | 4K | Return all currently open orders and open cases for a given customer (4 RDBMSs, 1 Web service)

The Customer Profile query retrieves the information needed to populate a "my account information" self-service Web page. The Electronics Order Detail and Apparel Order Detail queries retrieve order details (from a Web service in the electronics case and an RDBMS in the apparel case, the only significant difference between them) plus associated address information.
The Customer Credit Rating query combines a Web service call with two database calls, and the Login Customer View query retrieves the information needed to populate a "my open orders" self-service Web page. The overall Retail CSS Benchmark workload contains an equal mix of all five queries, and the benchmark measures both the performance of the individual queries and the performance of the overall mix. As indicated in Table 1, the queries vary in terms of the number and nature of the data sources that they access.

The second scenario using the Retail CSS Benchmark incorporates writes to simulate a mixed workload environment. The same five read queries were invoked, making up 80% of the workload, with the remaining 20% distributed among the three new write operations listed at the end of Table 2. The requests within each of the two groups (80% reads, 20% updates) were evenly distributed within their respective groups.
The Customer Update operation retrieves a given customer's profile information and then submits a change of address. The New Apparel Order and New Electronics Order operations each add one new customer order containing between one and five line items. For each of the New Order operations, the order comes from a brand-new customer 20% of the time; in these cases, a new customer insertion is performed before the new order is created. All updates and inserts were done using the BEA AquaLogic Data Services Platform client SDO APIs. The New Electronics Order operation involves the use of a BEA AquaLogic Data Services Platform update override to translate order insertion into the appropriate Web service call.

Each of these two scenarios will also be used in a clustering scalability test in which the number of BEA AquaLogic Data Services Platform instances is increased (with each instance on a separate server machine).

Table 2: Query functions and update operations.

Benchmark Request | Result Size | Query Description (including data sources)
Customer Profile | 4K | Return customer info with address and credit cards (2 RDBMSs)
Electronics Order Detail | 3K | Return order with line items plus billing and shipping info for an electronics order (2 RDBMSs, 1 Web service)
Customer Credit Rating | 1K | Return customer info plus customer's credit rating (1 RDBMS, 1 Web service)
Apparel Order Detail | 3K | Return order with line items plus billing and shipping info for an apparel order (3 RDBMSs)
Login Customer View | 4K | Return all currently open orders and open cases for a given customer (4 RDBMSs, 1 Web service)
Customer Update | 0K | Update customer info (1 RDBMS)
New Apparel Order | 0K | Insert new apparel order for a given customer (3 RDBMSs)
New Electronics Order | 0K | Insert new electronics order for a given customer (2 RDBMSs, 1 Web service)

Configuration

The basic Retail CSS Benchmark was run on a network of machines, with three machines hosting relational databases, one hosting the two Web services, one hosting
the BEA AquaLogic Data Services Platform server, and one generating a synthetic concurrent client request workload (as shown in Figure 3). Table 3 lists the details of the machines and the software used. The BEA AquaLogic Data Services Platform configuration itself employed one platform server instance on its own server machine. The BEA WebLogic Server instance used BEA JRockit v with both minimum and maximum memory set to 1 GB, plus a default BEA WebLogic Server configuration with the number of concurrent client request threads set to 30. The JDBC configuration for the BEA AquaLogic Data Services Platform server machine used the Oracle Thin Driver, the MS SQL driver, and the Sybase driver from BEA WebLogic Server; for each, the JDBC connection pool size was set to 30. The client workload was generated using a custom-built BEA AquaLogic Data Services Platform workload generation tool developed for AquaLogic Data Services Platform QA use.
Table 3: Machine configurations.

Machine Function | Software Version | Server Machine
Electronics Order DB | Oracle 9.2 | Win2K, 2 CPU, 2 GHz, 2 GB mem.
Apparel Order DB | Oracle 9.2 | Win2K, 2 CPU, 2 GHz, 2 GB mem.
Service DB | MS SQL 2000 | Win2K, 2 CPU, 2 GHz, 2 GB mem.
Customer DB | Sybase | Win2K, 2 CPU, 2 GHz, 2 GB mem.
Billing DB | Oracle 9.2 | Win2K, 2 CPU, 2 GHz, 2 GB mem.
Electronics Order Web Service | BEA WebLogic Server 8.1 SP4 | Win2K, 2 CPU, 2 GHz, 2 GB mem.
Credit Rating Web Service | Same as above | Same as above
BEA AquaLogic Data Services Platform Data Integration Server | BEA AquaLogic Data Services Platform (DSP) | Red Hat Linux, 2 CPU, 2 GHz, 2 GB mem.
Client Workload Generation | BEA AquaLogic Data Services Platform QA test driver | Win2K, 2 CPU, 2 GHz, 2 GB mem.

Data characteristics

Each database table has a primary key; secondary indices were built on the foreign keys directly used for queries. Table 4 summarizes the basic characteristics of the databases used in the Retail CSS Benchmark test runs.

Table 4: Data characteristics.

Name | Size
Customer Data | 10K customers, 20K addresses
Billing Data | 20K credit cards
Apparel Order Data | 100K orders, 300K line items
Electronics Order Data | 100K orders, 300K line items
Service Case Data | 3K service cases

In addition, an extended Retail CSS Benchmark was run to show the clustering characteristics of BEA AquaLogic Data Services Platform. That configuration employed one, two, and three BEA AquaLogic Data Services Platform server instances, each of which was hosted on a separate server machine of like kind.

Benchmark results

To investigate the performance of the BEA AquaLogic Data Services Platform under a realistic load, the Retail CSS Benchmark queries and updates were run from a variable number of clients. Client think times varied from 2 to 18 seconds (averaging 10 seconds) between requests.
The number of concurrently active clients was experimentally varied from 1 to 1600. Each client waited for a random think time, issued a request drawn randomly from the overall Retail CSS request distribution, and then waited to receive the platform's response before repeating the think-and-request process. (Note that once the client count exceeded 800, the client-generation program was split between two JVMs to ensure scalable workload generation.) The tests were run long enough to generate stable performance measurements, i.e., long enough to avoid startup/shutdown edge effects for the concurrent workload.
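The closed-loop client behavior described above can be sketched in simulation form. The request mix and the 2-to-18-second think-time distribution come from the benchmark description; the per-request service time, the client count, and the use of virtual time in place of real waiting are placeholders invented for the sketch.

```python
import random

# Simulation-only sketch of the closed-loop workload driver: each client
# repeatedly waits a random think time (2-18 s, mean 10 s), issues one
# request drawn from the benchmark mix, and waits for the response.
# The 0.2 s service time is an invented placeholder, not a measured figure.

REQUEST_MIX = ["Customer Profile", "Electronics Order Detail",
               "Customer Credit Rating", "Apparel Order Detail",
               "Login Customer View"]

def run_client(rng, duration_s, service_time_s=0.2):
    """Simulate one client for duration_s of virtual time; return the
    number of completed requests."""
    clock, completed = 0.0, 0
    while clock < duration_s:
        clock += rng.uniform(2, 18)   # random think time
        rng.choice(REQUEST_MIX)       # pick a request from the mix
        clock += service_time_s       # wait for the platform's response
        completed += 1
    return completed

rng = random.Random(42)
total = sum(run_client(rng, duration_s=600) for _ in range(10))
# Each think/request cycle averages ~10.2 s, so each client completes
# roughly 59 requests over 600 s of virtual time.
```

Because each client waits for its response before thinking again, offered load is self-limiting: slower responses automatically reduce the request rate, which is why response time and throughput have to be read together in the tables that follow.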
Query-only workload

Table 5 presents the response time measurements (in seconds) obtained for the different queries under the different levels of concurrency tested; it also presents the overall average response time for the workload (in seconds) and the corresponding throughput results (in queries/second).

Table 5: Query-only workload data, giving per-query response times (seconds) for the Apparel Order Detail, Electronics Order Detail, Customer Profile, Customer Credit Rating, and Login Customer View queries at each client count tested, together with the overall average response time (seconds) and throughput (queries/second).

There are several things to note regarding the individual query results. First, the BEA AquaLogic Data Services Platform was clearly able to run all queries quite quickly, providing more than adequate responsiveness for a Web application of this sort. All queries ran in sub-second response times, with an almost linear increase up to 1600 clients. Second, individual query performance varied with the number and nature of the data sources accessed, as is to be expected. For example, the Login Customer View query was by far the slowest, particularly at light loads, where its higher inherent latency was most apparent. This query involves accessing information from four relational data sources as well as talking to an application through a Web service (with the underlying application in turn accessing data from its internal RDBMS on a different server). Contrast this with the Apparel Order Detail query, where the BEA AquaLogic Data Services Platform talked directly to three RDBMS data sources. The Customer Profile and Customer Credit Rating queries were the lightest in terms of the amount of work involved, and thus had the best performance.
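A useful sanity check on closed-loop numbers like these is Little's law: with N clients, average response time R, and average think time Z, throughput is X = N / (R + Z). The response-time value used below is an illustrative sub-second figure, not one taken from the benchmark tables.

```python
# Little's law for a closed-loop benchmark: throughput X = N / (R + Z),
# where N = number of clients, R = average response time (s), and
# Z = average think time (s). The 0.5 s response time is illustrative.

def closed_loop_throughput(clients, response_time_s, think_time_s):
    return clients / (response_time_s + think_time_s)

# 1600 clients, 10 s average think time, ~0.5 s response time
x = closed_loop_throughput(1600, 0.5, 10.0)
```

This explains why sub-second response times with a 10-second think time put throughput close to N/10 queries per second, and why throughput keeps rising almost linearly as clients are added while responses stay fast.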
The two graphs in Figure 4 show how the overall average response time and corresponding throughput varied with the load placed on the system. It is evident from these results that the BEA AquaLogic Data Services Platform was able to scale gracefully for this application. Response time increases linearly as the load increases. Throughput also increases linearly and does not level off, indicating that the server was not yet saturated and that an even higher load would have been possible.

Mixed workload

To understand the performance of the BEA AquaLogic Data Services Platform server in a mixed workload environment, the three additional write operations were added to the basic Retail CSS Benchmark mix. The same basic configuration parameters were used (i.e., random think time, number of concurrent clients, removal of edge effects, and so on). Queries and update operations were randomly selected with an 80/20 distribution, respectively; each specific query or update was then selected randomly from within its request group.

Table 6 depicts the response time per request and the overall average response time (in seconds); the overall throughput is also listed (in requests/second). Comparing the first five queries in Table 6 to the response times in Table 5, all of the queries took longer, and the difference became more pronounced as the client load increased. This behavior is expected: the added update operations entail writing data to disk, which increases system load and decreases the responsiveness of the overall system. The Login Customer View query has a response time approximately equivalent to that of the two Add Order operations at lower system loads, but eventually the Add Order operation times begin to exceed those of the query. This is expected behavior even though the Login Customer View query accesses five data sources and retrieves more data.
The Add Apparel Order operation requires from two to six insert statements, and Add Electronics Order requires one Web service call plus two to six relational insert statements. In addition, 20% of new orders add a new customer, requiring insert operations into two additional data sources and two or three added network hops. In contrast, the smaller update operation, Customer Update, updates a single row of a single table in one data source and thus has a much smaller overall response time. The response time for this small update request is comparable to that of the Customer Profile query, which reads from two data sources.

Figure 4: Overall average response time (seconds) and corresponding throughput (queries/second) versus number of clients.
Table 6: Query and update workload data, giving per-request response times (seconds) for the five queries and three update operations at each client count, together with the overall, query-only, and update-only average response times (seconds) and throughput (requests/second).

Note 1. The Apparel Order Detail query is faster than the Electronics Order Detail query, as expected, since reading from an RDBMS is typically faster than reading from a Web service. However, the inverse does not necessarily hold for update operations: the Add Apparel Order operation is slower than the Add Electronics Order operation. The Web service update employs an update override that was hand-coded for that specific operation, whereas the RDBMS update uses the automatic update decomposition framework to propagate the update.

The overall system throughput again increased roughly linearly with the additional update operations, and it is quite close to the throughput of the query-only workload. Note, of course, that the overall throughput would change depending on the percentage of writes in a real system; the benchmark designers chose a hypothetical system with 20% writes (see note 2).

Figure 5 shows how overall average response times and corresponding throughput results varied with the load placed on the system by additional clients. The BEA AquaLogic Data Services Platform was able to scale gracefully in a mixed workload application, just as for a read-only workload.

Figure 5: Overall average response times (seconds) and corresponding throughput (operations/second) versus number of clients for the mixed workload.

2. The choice of 20% writes corresponds to the percentage of writes used in the TPC-W Benchmark.
Figure 6 compares the response time of the query workload to that of the mixed workload. With few clients, adding writes increases the overall response time only modestly; under heavier load, however, the mixed-workload response time begins to climb earlier and its curve is steeper. Thus the overall responsiveness of the system decreases as the percentage of update operations increases, due to the added disk I/O latency in the data sources.

Figure 6: Response time of the read workload versus the read/update workload, by number of clients.

Cluster benchmark results

To investigate scalability, the number of BEA AquaLogic Data Services Platform instances was increased from one to three. Each instance was deployed on its own server machine; all machines had the same CPU speed and main memory. The number of clients was set at 1600 times the number of platform instances, the peak load from the earlier single-instance runs.

Query-only workload

Table 7 depicts overall throughput (in queries/second) of the query-only workload for different BEA AquaLogic Data Services Platform cluster configurations.

Table 7: Cluster query-only workload data (total throughput for 1, 2, and 3 instances, with 1600 clients per instance).
The total system throughput increased linearly as new BEA AquaLogic Data Services Platform instances were added to the cluster,³ as expected in a well-behaved distributed system. Figure 7 shows this throughput growth with each added instance.

Figure 7: Cluster query workload throughput (queries/second) by number of DSP instances.

Mixed workload

Table 8 presents overall throughput (in requests per second) for the mixed-workload case. The cluster throughput again increases linearly, as expected, and the difference in throughput between the query-only and mixed workloads remains negligible.

Table 8: Cluster mixed workload data (total throughput for 1, 2, and 3 instances, with 1600 clients per instance).

Figure 8: Cluster mixed workload throughput (operations/second) by number of DSP instances.

3. Hardware availability limited further testing of DSP cluster scalability.
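The linear-scaling claim can be made precise as a scaling-efficiency ratio. The sketch below shows that computation alongside the cluster's 1600-clients-per-instance sizing rule; the throughput figures passed in are placeholders chosen only to illustrate the formula, since the extracted tables no longer carry the measured values.

```python
# Cluster sizing rule from the benchmark: 1600 clients per DSP instance.
PEAK_CLIENTS_PER_INSTANCE = 1600

def cluster_clients(instances: int) -> int:
    """Total simulated clients driving a K-instance cluster."""
    return PEAK_CLIENTS_PER_INSTANCE * instances

def scaling_efficiency(throughput_k: float, throughput_1: float, k: int) -> float:
    """(X_K / X_1) / K; a value of 1.0 means perfectly linear scaling."""
    return (throughput_k / throughput_1) / k

print([cluster_clients(k) for k in (1, 2, 3)])  # [1600, 3200, 4800]
# Placeholder throughputs, not measured values:
print(scaling_efficiency(600.0, 200.0, 3))      # 1.0 -> perfectly linear
```

An efficiency near 1.0 across cluster sizes is what "scaled nearly linearly" means quantitatively; values well below 1.0 would indicate contention in a shared resource such as the back-end databases.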
Conclusion

This paper has looked inside BEA AquaLogic Data Services Platform to understand its architectural and performance characteristics. In particular, it has discussed the techniques the platform uses to efficiently process requests that integrate data from diverse distributed data sources. A multi-user benchmark based on a prototypical retail customer application scenario was described, along with results from running BEA AquaLogic Data Services Platform under both query-only and mixed workloads. The benchmark results show that BEA AquaLogic Data Services Platform provides fast query response times for such scenarios and that its performance scales well as user load increases. Similar results were obtained for a mix of both queries and updates. Scaling to a cluster of BEA AquaLogic Data Services Platform servers was also investigated; the capacity of the system to serve client requests scaled nearly linearly. The benchmark results for all the scenarios described show that BEA AquaLogic Data Services Platform can be expected to perform well in realistic customer scenarios.

Addendum A: Large data transfer within BEA AquaLogic Data Services Platform

BEA AquaLogic Data Services Platform has additional functionality for handling large data sets in various scenarios. Large data sets may result from data aggregated from multiple sources, from a data source queried with no filtering predicate (i.e., one that extracts the whole data set), or from a single physical data source built exclusively for extraction from an operational mainframe system.
Throughput in such cases is determined largely by the same factors described earlier, namely:

- The complexity of the data services and queries used by the application
- The nature of the underlying data sources, including:
  - The kind of sources they are (relational or functional)
  - How expensive it is to call them (e.g., calling a nearby database versus a Web service somewhere across the globe)
  - How heavily loaded they are with requests from other applications that rely on the same data sources (e.g., requests from users of a packaged application that is exposed to the platform as a functional data source)
- The expected number of concurrent users of the deployed application and the resulting concurrent query mix

To show the range of throughput numbers that can be expected, two very different data services were used. The first selectively aggregates information from multiple data sources, as in a "single view of X"-style portal; specifically, the same Login Customer View used in the preceding study is reused for this experiment. The BEA AquaLogic Data Services Platform server streaming API accesses this view to select all instances from the data service, e.g., for archiving. The returned data is not processed; only streaming performance is explored. (Typically, the returned data would be searched and segmented and/or written to a file.)
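The streaming access pattern just described, selecting all instances and writing them out without further processing, can be sketched generically. In the sketch below, fetch_stream() is a stand-in for the platform's server streaming API (whose actual signatures this paper does not detail); the point is that the consumer holds only one record in memory at a time, which is what makes memory use independent of result size.

```python
import os
import tempfile
from typing import Iterable, Iterator

def fetch_stream(n_records: int) -> Iterator[bytes]:
    """Stand-in for a streaming data-service result: yields records lazily."""
    for i in range(n_records):
        yield f"<customer id='{i}'/>\n".encode()

def archive(records: Iterable[bytes], path: str) -> int:
    """Write records to disk incrementally; memory use does not grow with result size."""
    written = 0
    with open(path, "wb") as out:
        for rec in records:
            out.write(rec)
            written += len(rec)
    return written

path = os.path.join(tempfile.gettempdir(), "custview.xml")
total = archive(fetch_stream(1000), path)
print(f"archived {total} bytes to {path}")
os.remove(path)
```

Because archive() consumes an iterator rather than a fully materialized list, the same code handles a thousand records or a hundred million; only the elapsed time changes, which is exactly the behavior the streaming experiments below measure.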
The second data service presents data selected from a single relational data source with one column of 2000 bytes. This exceedingly simple data service creates a situation in which performance is limited by neither a slow back end nor complex integration, permitting exploration of the rate at which BEA AquaLogic Data Services Platform can stream data from a single back-end data source. The elapsed time and the amount of data transferred in megabytes are listed in Table 9. The maximum data streaming rate reached with no processing was approximately 21 MB per second. No result is shown for the largest result size in the CustView case because the benchmark data set did not contain enough data to produce a result of that size.

Table 9: Data streaming from a single back-end source (elapsed time by MB of data transferred, for the aggregated CustView and pure streaming cases).

The results, shown in Figure 9, indicate that overall throughput for streaming data in BEA AquaLogic Data Services Platform spans a very wide range and is indeed dependent, as expected, on the factors listed above. BEA AquaLogic Data Services Platform streams data effectively in both use cases, with overall performance depending on the complexity of the service from which data is streamed. Full materialization of data is avoided in both cases, allowing large data volumes to be integrated and streamed with a result-size-independent memory configuration.

Figure 9: Overall throughput for streaming data (MB of data per second on a logarithmic scale, versus MB of data transferred) for the aggregated CustView and pure streaming cases.
About BEA

BEA Systems, Inc. (NASDAQ: BEAS) is a world leader in enterprise infrastructure software, providing standards-based platforms to accelerate the secure flow of information and services. BEA's product lines (WebLogic, Tuxedo, JRockit, and the new AquaLogic family of service infrastructure) help customers reduce IT complexity and successfully deploy service-oriented architectures to improve business agility and efficiency. For more information, please visit bea.com.
BEA Systems, Inc. | North First Street, San Jose, CA | bea.com | CWP1079E1105-1A
More informationJava Performance. Adrian Dozsa TM-JUG 18.09.2014
Java Performance Adrian Dozsa TM-JUG 18.09.2014 Agenda Requirements Performance Testing Micro-benchmarks Concurrency GC Tools Why is performance important? We hate slow web pages/apps We hate timeouts
More informationVDI Optimization Real World Learnings. Russ Fellows, Evaluator Group
Russ Fellows, Evaluator Group SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material
More informationColumnstore Indexes for Fast Data Warehouse Query Processing in SQL Server 11.0
SQL Server Technical Article Columnstore Indexes for Fast Data Warehouse Query Processing in SQL Server 11.0 Writer: Eric N. Hanson Technical Reviewer: Susan Price Published: November 2010 Applies to:
More informationQLIKVIEW SERVER LINEAR SCALING
QLIKVIEW SERVER LINEAR SCALING QlikView Scalability Center Technical Brief Series June 212 qlikview.com Introduction This technical brief presents an investigation about how QlikView Server scales in performance
More informationPart 3 - Performance: How to Fine-tune Your ODM Solution. An InformationWeek Webcast Sponsored by
Part 3 - Performance: How to Fine-tune Your ODM Solution An InformationWeek Webcast Sponsored by Webcast Logistics Today s Presenters David Granshaw WODM Performance Architect (Events) Pierre-André Paumelle
More informationImprove Business Productivity and User Experience with a SanDisk Powered SQL Server 2014 In-Memory OLTP Database
WHITE PAPER Improve Business Productivity and User Experience with a SanDisk Powered SQL Server 2014 In-Memory OLTP Database 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table of Contents Executive
More informationPerformance Modeling for Web based J2EE and.net Applications
Performance Modeling for Web based J2EE and.net Applications Shankar Kambhampaty, and Venkata Srinivas Modali Abstract When architecting an application, key nonfunctional requirements such as performance,
More informationA Comparison of Oracle Performance on Physical and VMware Servers
A Comparison of Oracle Performance on Physical and VMware Servers By Confio Software Confio Software 4772 Walnut Street, Suite 100 Boulder, CO 80301 www.confio.com Introduction Of all the tier one applications
More information