BIRT iServer: Performance and Scalability on IBM AIX


The People Behind BIRT™

Technical White Paper

BIRT iServer: Performance and Scalability on IBM AIX

BIRT-Based Design Sharing in Large-Scale Deployments: BIRT Reports, BIRT Studio, and BIRT Interactive Viewing

Benchmark tests at the IBM Innovation Center show Actuate is again the industry's highest performing and most scalable reporting platform. Tests focused on the collaborative reporting capabilities of the BIRT iServer, specifically report generation, standard viewing, interactive viewing, on-demand reporting, and web-based report authoring with BIRT Studio. Highlights of tests on a 64-CPU-core, 8-node BIRT iServer cluster include:

- For all activities, near-perfect scalability up to 64 CPU cores
- 1.8 billion report pages generated per day
- 43,000 active users viewing reports with the Standard Viewer
- 10,000 active users interactively viewing reports
- 10,500 active users running and viewing on-demand reports
- 1,000 active users of BIRT Studio on 6 Information Console CPU cores

Notice

The information in this white paper is proprietary to Actuate Corporation ("Actuate") and may not be used in any form without the prior consent of Actuate. Copyright Actuate Corporation. All rights reserved. Actuate Corporation trademarks and registered trademarks: Actuate, e.report, e.reporting, and Report Encyclopedia. All other trademarks are the property of their respective owners.

Version 2, March 19, 2010, Actuate Corporation

Table of Contents

Notice
Executive Summary
    High Performance for All Report Types
    Impact of Multiple Cores Per CPU
    Results Snapshot
Introduction
BIRT-Based Design Sharing
    Scalability and Performance Are Essential for Success
Characteristics of Meaningful Benchmarks
    Types of Reporting Loads
    The Measurements that Matter
BIRT on AIX: Results & Analysis
    Batch Report Generation of BIRT Reports
    Standard Viewing of Cached BIRT Reports
    Interactive Viewing of Cached BIRT Reports
    On-Demand Reporting: BIRT Reports
    Report Authoring with BIRT Studio
System Configuration
Conclusion
Appendix: At a Glance Summary of Benchmark Results

Executive Summary

Actuate introduced the Rich Information Application (RIA) architecture as a breakthrough concept that makes all information users (end users, power users, business users, report developers, and application developers) equal partners and collaborators in creating reports and information applications. Based on open-source Eclipse BIRT and the BIRT iServer, this unique architecture results in faster and more accurate report development, lower costs, and higher user satisfaction. BIRT deployments achieve business goals faster than traditional reporting deployments, are more successful, and grow faster.

Keeping these characteristics in mind, Actuate implemented BIRT-based RIAs on top of the BIRT iServer's scalable foundation to achieve three performance goals:

1. Allow smaller reporting applications, even those with only a handful of users and reports, to grow easily, quickly, and economically.
2. For applications of all sizes, provide best-in-class performance for the hardware used.
3. Support the world's largest reporting deployments, with millions of users and billions of reporting requests per day.

Benchmark tests at the IBM Innovation Center for Business Partners proved Actuate achieved all three goals. The BIRT iServer can scale with near-perfect efficiency, generate billions of report pages per day, and support millions of users.

High Performance for All Report Types

This series of benchmark tests specifically exercised BIRT Reports and BIRT Studio on the BIRT iServer. Similar internal tests and past benchmarks show that other Actuate report types, including e.reports and BIRT Spreadsheet, have comparable performance. Customers can be confident that all Actuate report types meet the most demanding performance and scalability requirements, since they all run on the same industry-leading high-performance platform, the BIRT iServer.

Impact of Multiple Cores Per CPU

This benchmark and other internal tests prove that the BIRT iServer scales in a near-linear fashion, whether across multiple cores within a CPU or across multiple CPUs within a machine. The total number of CPU cores, not the number of CPUs, determines the BIRT iServer's capacity. In other words, a machine with one dual-core CPU performs the same as a machine with two single-core CPUs, provided the CPU clock speed is the same for both machines.

Results Snapshot

The following table provides a snapshot of the benchmark results. Most test scenarios were run on a 64-CPU-core, 8-node AIX-based BIRT iServer configuration. The results show Actuate's ability to handle millions of users and billions of reporting transactions while maintaining a high-quality, quick-response experience for users.

Test Results Snapshot

Batch Report Generation of Cached BIRT Reports and BIRT Studio Reports (1)
    1.8 billion report pages generated per day
    98.9% scalability from 1 node (8 CPU cores) to 8 nodes (64 CPU cores)

Viewing of Cached BIRT Reports and BIRT Studio Reports
    Support for 43,000 active viewing users, implying support for user populations ranging from 430,000 to 43 million
    92.1% scalability from 1 node (8 CPU cores) to 8 nodes (64 CPU cores)

Interactive Viewing of Cached BIRT Reports and BIRT Studio Reports
    Support for 10,000 active users interactively viewing reports and performing a variety of interactive operations, implying support for user populations from 100,000 to 10 million
    99.8% scalability from 1 node (8 CPU cores) to 8 nodes (64 CPU cores)

On-Demand Report Generation & Viewing of BIRT Reports (2)
    Support for 10,500 active users running and viewing ad-hoc reports, implying support for user populations from 105,000 to 10.5 million
    97.7% scalability from 1 node (8 CPU cores) to 8 nodes (64 CPU cores)

Report Authoring with BIRT Studio
    On 6 iPortal CPU cores, support for 1,000 active users creating reports over the web using BIRT Studio

(1) BIRT Studio reports are based on the same report design format as BIRT Reports, but are created by non-technical users with a web-based report designer, BIRT Studio. Ad-hoc web reports are based on report templates created with the Eclipse BIRT Report Designer.

(2) On-Demand Reports are generated at an end user's request and are immediately viewed without being stored in the report repository. On-Demand Reports are also called transient reports.

Introduction

Actuate introduced the Rich Information Application (RIA) architecture. This breakthrough concept makes all information users (end users, power users, business users, report developers, and application developers) equal partners and collaborators in developing reports. Based on open-source Eclipse BIRT and the BIRT iServer, this unique architecture results in:

- Faster and more accurate report development
- Lower costs
- Increased user satisfaction
- Quicker achievement of business goals
- Higher success rates
- Faster growth than traditional reporting deployments

BIRT-based RIA deployments therefore require a platform designed to grow and accommodate ever-increasing performance demands. With this in mind, Actuate implemented BIRT on top of the BIRT iServer's proven scalable and high-performance foundation.

This white paper details extensive benchmarks of the BIRT iServer, with special emphasis on production of BIRT content. The results show that, once again, the BIRT iServer is the highest performing and best scaling reporting platform in the industry. Customers can therefore feel confident that Actuate supports the most demanding applications with a minimum of hardware and will be able to accommodate large-scale future growth.

BIRT-Based Design Sharing

With BIRT-based design sharing, users of all types can create reports, interact with them, and modify them using the Actuate environment most appropriate for their goals and skill level. They can share their reports with other users, who can in turn interact with and alter the reports using different skill-specific environments.

For content development, Actuate provides two BIRT-based skill-specific environments. The Actuate BIRT Designer Pro is a powerful, desktop-based report design tool for technical report developers. BIRT Studio is an easy-to-use, web-based report design tool for less technical users who want to create their own reports. There are two additional environments, both built into the Actuate Information Console, for end users to interact with reports over the web. The Standard Viewer displays paginated report documents in HTML. The BIRT Interactive Viewer also displays paginated report documents in HTML, but it additionally lets end users interact with report documents: re-sorting, adding calculated columns, applying new grouping, hiding columns or charts, and so on.

An example of BIRT-based design sharing in action: A technical report developer creates a BIRT report using the power of the Actuate BIRT Designer Pro. The developer deploys the finished report to the BIRT iServer. The BIRT iServer generates the report and produces a report document. Some end users view the report document through the Standard Viewer. Other end users view the report document through the BIRT Interactive Viewer; these users modify their view of the report by re-sorting, adding or subtracting columns, changing grouping, and so on. End users of the BIRT Interactive Viewer save their modified views of reports to the BIRT iServer. At a future time, the end users can re-run the report and see the new report document with their earlier modifications applied.

Another example: A report developer creates a BIRT Studio template using the Actuate BIRT Designer Pro and deploys it to the BIRT iServer. A business user accesses BIRT Studio over the web and starts creating a new report by selecting a BIRT template already deployed to the BIRT iServer. The business user creates his report using BIRT Studio, and then deploys the report to the BIRT iServer. Other users can then run and view the report using either the BIRT Interactive Viewer or the Standard Viewer.

Scalability and Performance Are Essential for Success

Actuate leveraged the scalable foundation of the BIRT iServer to achieve three performance goals. All three goals are essential to supporting the unique demands of today's RIAs:

1. Allow smaller applications, even those with only a handful of users and reports, to grow easily, quickly, and economically.
2. For applications of all sizes, provide best-in-class performance for the hardware used.
3. Support the world's largest reporting deployments, with user populations of millions of users and reporting loads of billions of reports per day.

Goal 1: Scalability to Allow Rich Information Applications of All Sizes to Grow Economically

Actuate achieves goal one, allowing small to medium-sized information applications to grow quickly and easily, through the BIRT iServer's built-in scalability. What is scalability? Simply put, a system that is perfectly scalable can double its capacity (for example, the number of users or the number of reports per second) by doubling the system's hardware. Scalability above 80% is very unusual in enterprise software. For most enterprise systems, each bit of hardware added to the system is used far less efficiently than the last. In fact, many enterprise software systems quickly reach a point where adding more hardware actually decreases performance instead of increasing it proportionately. Deploying such a limited, un-scalable reporting infrastructure can be disastrous if an information application becomes popular with users. Actuate has proved time and time again, with this benchmark and numerous others, that it provides the most scalable reporting platform in the industry. The BIRT iServer is capable of scaling to configurations in excess of 80 CPUs and is able to handle millions of users and billions of transactions per day.

Goal 2: Provide Best-in-Class Performance

The BIRT iServer is designed from the ground up to make the fullest possible use of all available hardware. This means that all CPU cores in all machines in a BIRT iServer cluster can lend their full power to the real work of the server: report generation and rendering. The BIRT iServer wastes few hardware resources, letting customers get the full benefit of the hardware they paid for. Further, excellent performance on configurations of all sizes means that all RIAs, from the smallest to the very largest, benefit. All reporting applications experience quick response, rapid report generation and rendering, and the ability to serve many users of different types all at one time.

Goal 3: Extreme Performance for the Largest Information Applications

The BIRT iServer is designed to handle truly enormous applications that fully use 80+ CPUs, serve user populations of millions of users, and handle billions of reporting requests per day. This type of mega-performance benefits applications of ALL sizes, however, not just the world's largest and most demanding applications. Successful deployments, regardless of their initial size, nearly always grow in terms of the number of users, the number of report requests, and the number of information applications. Organizations can be confident that no matter how large or how fast their reporting application grows, their BIRT-based infrastructure will be able to scale simply by adding hardware to the system.

Characteristics of Meaningful Benchmarks

All benchmark studies conducted by Actuate are specifically designed to provide customers with as much meaningful information and analysis as possible. With this design, customers can then apply this knowledge to their own applications and configurations.

Types of Reporting Loads

For BIRT, there are five types of load that consume the vast majority of processing power:

1) Batch generation of reports. Reports generate in the background and produce cached report documents that are stored for future viewing. These usually run on a schedule, most often in the middle of the night.
2) Standard viewing of cached reports. Users view pages of report documents that were batch-generated earlier.
3) Interactive viewing of cached reports. Users view pages of reports that were batch-generated earlier, and also interact with the report documents: re-sorting, filtering, hiding columns, adding calculated columns, and so on.
4) On-demand reporting. Users request that reports be generated right now and rendered for viewing while they wait.
5) Report authoring by users. Users use the web-based BIRT Studio to create their own reports. The report authoring process usually triggers several requests to the reporting server.

In a typical reporting deployment, all five loads are present in proportions that vary depending on the time of day and usage patterns. Any meaningful benchmark needs to characterize performance for all five types of reporting loads.

The Measurements that Matter

Certain types of benchmark tests and results measurements are needed to properly depict a reporting platform's performance and scalability. To that end, for each configuration, the Actuate benchmark directly measures:

1) Throughput rates for all types of workloads
2) Scaling efficiency across configurations of varying sizes
3) The number of active users supported for interactive tests (standard viewing, interactive viewing, on-demand reporting, and ad-hoc report authoring)
4) Average user response times for interactive tests

Throughput

Throughput is the most critical benchmark measurement because it is the most precise measure of system capacity (3). Throughput is how much content can be generated per second, or how much content can be served to users per second. Scalability can only be proven by measuring how a system's throughput increases as hardware resources increase. For batch generation tests, throughput is measured in report pages generated per second.

(3) As explained in more detail in subsequent sections, the number of users and response times are not true measures of system capacity. This is because the number of users can often be substantially increased by increasing the acceptable response time. The only way to normalize benchmarks where both the number of users and the response times change from configuration to configuration is to use these figures to derive the system throughput.

For standard viewing, interactive viewing, on-demand reporting, and ad-hoc report authoring tests, throughput is measured in user requests per second. For Actuate, when a user makes a request to view or create a report, usually just one URL is triggered, so Actuate user request throughput is measured in URLs per second.

Some competitive benchmarks do not provide throughput figures, but instead provide only numbers of users and average response time. These figures alone cannot adequately describe system capacity or scalability because they can be manipulated. For example, the number of users can always be dramatically increased by sacrificing acceptable response time. Similarly, the response time can be lowered if each active user makes fewer requests.

Scaling Efficiency

System scalability is measured as scaling efficiency. In a system that scales in perfectly linear fashion, the scaling efficiency is 100%, and each additional machine added to the system configuration is used as efficiently as the first. If scaling efficiency is greater than 95%, systems are typically referred to as having linear scalability. If scaling efficiency is less than 70%, the system is not scalable and should not be considered for enterprise deployments. Scaling efficiency is calculated as follows:

    Scaling Efficiency = Actual Throughput / Theoretical Throughput
    Theoretical Throughput = Throughput for 1 CPU x Number of CPUs

Number of Active Users

An important measurement for reporting platforms is the number of active users that can be supported on a given configuration. This metric is important for any test that measures the capacity of the server to handle interactive user requests, such as viewing of cached reports or on-demand report generation and viewing.

The definition of active users requires some explanation. Any application supports a population of named users. Named users are all the distinct users that are able to log in to use the server. At any time, some of these named users are active users, meaning they are logged in and actively submitting report viewing and generation requests every X seconds on average, where X is referred to as the think time (4). The portion of named users that are active at any one time depends on whether the application is deployed within the company, on the extranet, or on the Internet. Figure 1 describes typical percentages.

    Inside-the-firewall applications: 1-10% of named users active
    Extranet applications: 0.1-1% of named users active
    Internet applications: less than 0.1% of named users active

Figure 1 The ratio of active users to named users varies according to application characteristics. For applications deployed inside the firewall, a higher proportion of total named users are active than for Internet reporting applications.

(4) User counts are more accurate and reflective of real-world applications when a random think time is used. In Actuate's experience, think times that are random but center on a 30-second average most accurately reflect real-world user behavior for reporting platforms. The randomness of the think time creates peaks and valleys in the reporting load, similar to what real-world users generate. Tests with zero think time keep an absolutely constant load for the entire test and do not test the system's ability to continue operating under variable load.
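To make the scaling-efficiency calculation concrete, here is a minimal sketch in Python (the function name is illustrative, not part of any Actuate tooling) that applies the formula above to the batch-generation figures reported later in this paper: 2,736 pages per second on 8 CPU cores versus 21,653 pages per second on 64 CPU cores.

```python
def scaling_efficiency(base_throughput, base_cpus, actual_throughput, actual_cpus):
    """Scaling efficiency = actual throughput / theoretical throughput, where the
    theoretical throughput assumes perfectly linear scaling from the base configuration."""
    theoretical_throughput = base_throughput * (actual_cpus / base_cpus)
    return actual_throughput / theoretical_throughput

# Batch-generation figures from this benchmark (report pages per second)
efficiency = scaling_efficiency(base_throughput=2_736, base_cpus=8,
                                actual_throughput=21_653, actual_cpus=64)
print(f"Scaling efficiency: {efficiency:.1%}")  # ~98.9%, matching the reported value
```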

Response Time

Throughput, user counts, and scaling efficiency are meaningless, and can be manipulated, without strict limitations on acceptable response time. This is because an application is simply unusable if response times are too long. According to Jakob Nielsen, renowned usability expert, 10 seconds is about the limit for keeping the user's attention focused on the dialogue; for longer delays, users will want to perform other tasks while waiting for the computer to finish. For Actuate benchmarks, the following response time criteria apply to all interactive tests:

- Average and median response time must be under 5 seconds.
- 90% of all requests must be serviced within 5 seconds.
- 100% of all requests must complete within 120 seconds.

Only the results of tests that meet the above criteria are published within this white paper.
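The sketch below shows one way these acceptance criteria could be checked against a list of measured response times. It is a generic illustration, not part of Actuate's test harness.

```python
import statistics

def meets_response_criteria(response_times_sec):
    """Apply the response-time criteria used in this paper: average and median under
    5 seconds, at least 90% of requests under 5 seconds, and no request over 120 seconds."""
    fast_share = sum(1 for t in response_times_sec if t < 5.0) / len(response_times_sec)
    return (statistics.mean(response_times_sec) < 5.0
            and statistics.median(response_times_sec) < 5.0
            and fast_share >= 0.9
            and max(response_times_sec) <= 120.0)

# Example: a run dominated by sub-second responses with a few slower requests
print(meets_response_criteria([0.4, 0.6, 0.9, 1.1, 0.7, 4.2, 0.5, 0.8, 1.0, 2.3]))  # True
```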

BIRT on AIX: Results & Analysis

For most test scenarios, the BIRT iServer was tested in cluster configurations ranging from 1 to 8 nodes, where each node had 8 CPU cores. In the largest 8-node configuration, there were 64 CPU cores in total. See the System Configuration section for more details.

Batch Report Generation of BIRT Reports

Description

The Batch Report Generation scenario tests simultaneous generation of multiple BIRT reports, where all reports are scheduled to start running at the same time. The resulting report documents are stored in the BIRT iServer Encyclopedia; users do not wait to view the results. This test determines the average number of report pages that the BIRT iServer can generate per second over a prolonged period of time.

Results

In the 64-CPU-core configuration, throughput reached 21,653 pages per second, the equivalent of 1.8 billion report pages per day. Scaling efficiency was over 98%, implying that with additional CPU power throughput could be increased even more.

Figure 2 (chart) Batch Generation: Near-perfect linear (> 98%) scalability across 8 nodes with 64 CPU cores. The BIRT iServer sustained a generation rate of 21,653 report pages per second.

Batch Generation Results (Figure 3):
    1 Node (8 CPU cores):   2,736 pages/sec
    2 Nodes (16 CPU cores): 5,769 pages/sec
    4 Nodes (32 CPU cores): 11,178 pages/sec
    8 Nodes (64 CPU cores): 21,653 pages/sec
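As a quick arithmetic check on the headline figure, sustaining 21,653 pages per second over a 24-hour day works out to roughly 1.87 billion pages, consistent with the 1.8 billion pages per day quoted above:

```python
pages_per_second = 21_653
pages_per_day = pages_per_second * 60 * 60 * 24  # seconds in one day
print(f"{pages_per_day:,} pages per day")  # 1,870,819,200 -> roughly 1.8 billion
```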

Standard Viewing of Cached BIRT Reports

Description

This test simulates end users viewing reports through the Standard Viewer. Multiple users simultaneously log in and view random pages of cached report documents. The main outputs from this test are:

- Server throughput in viewing requests per second (URLs/second)
- The maximum number of active users that can be supported while meeting the 5-second response time criteria

Results

On 64 CPU cores, the BIRT iServer supported 43,000 active users. This translates into support for 430,000 to 43 million named users. These users collectively submitted 1,313.2 viewing requests per second, the equivalent of 113 million viewing requests per day. Scaling efficiency was over 92%, implying support for even larger hardware configurations.

Figure 4 Standard Viewing of Cached Reports. On 64 CPU cores, Actuate supported 43,000 active users, or 671 users per CPU core.

Results for Standard Viewing of Cached Reports (Figure 5):
    Active users: 5,600 on 1 node (8 CPU cores), 11,500 on 2 nodes (16 CPU cores), 22,000 on 4 nodes (32 CPU cores), 43,000 on 8 nodes (64 CPU cores)
    Throughput on the 64-CPU-core configuration: 1,313.2 viewing requests per second
    Response times averaged less than one second for all configurations.
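These figures can also be cross-checked with a standard closed-system approximation that is not part of the paper itself: with N active users, an average think time Z, and an average response time R, the offered request rate is roughly N / (Z + R). Assuming the roughly 30-second average think time described earlier and the sub-second response times reported here, 43,000 active users correspond to on the order of 1,400 requests per second, the same ballpark as the measured 1,313.2 requests per second.

```python
def closed_system_throughput(active_users, think_time_sec, response_time_sec):
    """Little's-law style estimate for a closed system: each active user completes one
    request roughly every (think time + response time) seconds."""
    return active_users / (think_time_sec + response_time_sec)

# Standard Viewing figures, assuming the ~30 s average think time described earlier
estimate = closed_system_throughput(active_users=43_000,
                                    think_time_sec=30.0,
                                    response_time_sec=0.64)
print(f"~{estimate:,.0f} requests/sec")  # ~1,403, close to the measured 1,313.2
```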

Interactive Viewing of Cached BIRT Reports

Description

The Interactive Viewing test simulates multiple users simultaneously logging in to Information Console and using the BIRT Interactive Viewer to access random pages in report documents. While viewing the reports, users perform interactive operations such as sorting and filtering to modify their view of the report document. Figure 6 describes the operations that each Interactive Viewing test user performs. Each active user performs these operations in succession, with a think time interval between each operation. Think times are random but average 30 seconds. The randomness of the interval between operations allows this test to more closely emulate real-world conditions. This test measures:

- Throughput, in interactive viewing requests per second (reported in URLs/sec)
- The number of active users that can be supported while meeting the 5-second response time criteria

Interactive Viewing Test: operations performed by each active user (Figure 6). Each active user submits a URL for the first operation, waits the given think time (randomly determined but averaging 30 seconds), then submits the URL for the next operation, waits a random think time, and so on. A load-driver sketch follows the list.

    Log in to Information Console
    Load report in BIRT Interactive Viewer
    Get a report page
    Change text of a column label
    Change font of a column label
    Undo the report's current grouping
    Create new grouping
    Sort on new grouping
    Create calculated column
    Add aggregation to report group
    Set a filter
    Change alignment of data displayed in a column
    Hide a column
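To illustrate what each simulated user does in this scenario, here is a minimal load-driver sketch. The operation names and URL scheme are hypothetical stand-ins (the paper does not document the actual Information Console URLs), and the exponential think-time distribution is simply one way to produce random intervals averaging 30 seconds.

```python
import random
import time
import requests  # widely used third-party HTTP client

# Hypothetical endpoints standing in for the URL each operation would trigger
OPERATIONS = [
    "login", "load_report", "get_page", "change_column_label", "change_column_font",
    "remove_grouping", "create_grouping", "sort_on_grouping", "create_calculated_column",
    "add_group_aggregation", "set_filter", "change_column_alignment", "hide_column",
]

def simulate_interactive_user(base_url, mean_think_time_sec=30.0):
    """Replay the interactive-viewing operation sequence for one simulated user,
    pausing a random think time (averaging ~30 seconds) between operations."""
    for operation in OPERATIONS:
        requests.get(f"{base_url}/{operation}")  # one URL request per operation
        time.sleep(random.expovariate(1.0 / mean_think_time_sec))  # random think time
```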

Results

The results of this test are excellent. Interactive viewing requires many rapid-fire changes and re-renderings of report documents as users modify reports. Even under this intensive load, the BIRT iServer still supported a very large number of users while maintaining very low response times: over 10,000 active users on 64 CPU cores.

Figure 7 (chart) Interactive Viewing shows over 99% scaling efficiency across 64 CPU cores. Over 10,000 active users can be supported on a 64-CPU-core configuration.

Results for Interactive Viewing (Figure 8):
    Active users: 1,200 on 1 node (8 CPU cores), 2,450 on 2 nodes (16 CPU cores), 4,850 on 4 nodes (32 CPU cores), 10,000 on 8 nodes (64 CPU cores)
    Response times averaged less than one second for all test configurations.

On-Demand Reporting: BIRT Reports

Description

The On-Demand test simulates multiple users logging in. Each user then generates a report on demand and immediately views the results in the web browser. The resulting report document is not saved to the BIRT iServer Encyclopedia. Like the viewing tests, this test determines maximum throughput in URLs per second and the maximum number of active users.

Results

The BIRT iServer proved able to support 10,500 active users of on-demand reports. These users collectively triggered the equivalent of 26.6 million URLs per day. Total user populations of 105,000 to 10.5 million can be supported, depending on the characteristics of the application. The BIRT iServer again demonstrated near-perfect linear scalability, with a scaling efficiency of greater than 90%.

Figure 9 (chart) On-Demand Reporting. On 64 CPU cores, the BIRT iServer was able to support 10,500 active users of on-demand reports and achieve greater than 90% scaling efficiency.

Results for On-Demand Reporting (Figure 10):
    Active users: 1,400 on 1 node (8 CPU cores), 1,700 on 2 nodes (16 CPU cores), 4,800 on 4 nodes (32 CPU cores), 10,500 on 8 nodes (64 CPU cores)
    Even under extreme load, Actuate was able to handle 10,500 active users.
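The implied user-population ranges quoted for the interactive tests follow directly from the active-user ratios in Figure 1: dividing the active-user count by the fraction of named users active at any one time gives the named-user population. A small sketch of that arithmetic for the on-demand result:

```python
# Active-user ratios taken from Figure 1 (share of named users active at any one time)
ACTIVE_USER_RATIOS = {
    "inside the firewall (10% active)": 0.10,   # top of the 1-10% range
    "extranet (0.1% active)": 0.001,            # bottom of the 0.1-1% range
}

def named_user_population(active_users, active_ratio):
    """Named users implied by a given number of concurrently active users."""
    return active_users / active_ratio

for deployment, ratio in ACTIVE_USER_RATIOS.items():
    print(f"{deployment}: {named_user_population(10_500, ratio):,.0f} named users")
# inside the firewall (10% active): 105,000 named users
# extranet (0.1% active): 10,500,000 named users  -> the "105,000 to 10.5 million" range
```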

Report Authoring with BIRT Studio

Description

This test simulates multiple active users concurrently creating their own reports over the web using BIRT Studio. Each active user performs a sequence of operations, all of which are typical of users creating a report from a report template. Most operations are performed browser-side via the AJAX technology built into BIRT Studio or are performed within the Information Console tier, although some operations trigger URL requests back to the BIRT iServer. Between each operation, active users wait for a think time of random length centered on a 30-second average. Figure 11 details the report authoring tasks performed by each simulated active user.

BIRT Studio ad-hoc report authoring: operations performed by each active user (Figure 11)

    Log in to Information Console
    Launch BIRT Studio, select an Information Object as a data source
    Add 5 columns in one operation
    Add sorting to one column
    Add grouping by one column
    Insert a calculated column
    Add aggregation to grouping
    Change text of a column label
    Set filter on a column
    Change alignment of data in a column
    Set currency format of data in a column
    Set a filter
    Change alignment of data displayed in a column
    Hide a column

Note that for this scenario, the number of CPU cores in the Information Console tier is much more relevant than the number of cores in the BIRT iServer tier. This is unlike the other test scenarios, because ad-hoc BIRT report authoring primarily exercises the Information Console instead of the BIRT iServer. BIRT Studio is an AJAX application, and most steps in designing an ad-hoc BIRT report do not trigger requests back to the BIRT iServer. Instead, as much report authoring activity as possible is performed browser-side or in the Information Console web tier, in order to provide the user with a highly responsive, interactive experience.

Results

In a three-node Information Console configuration totaling 6 CPU cores, BIRT Studio supported 1,000 active users authoring their own reports over the web. The BIRT iServer tier, a single 8-CPU-core node, was considerably over-sized for this workload. Additional hardware configurations were not tested due to lab schedule constraints, but Actuate is confident that BIRT Studio can comfortably scale on larger hardware configurations.

Results for ad-hoc BIRT report authoring on 3 iPortal nodes (6 CPU cores) (Figure 12):
    Throughput: 32.5 ad-hoc report authoring requests per second (URLs/s)
    Active users: 1,000
    Response time: 1.2 seconds

System Configuration

Actuate hopes this benchmark will give organizations a clear picture of how their reporting applications might perform on the BIRT iServer. Thus, the benchmark test configurations exercise all components found in real-world reporting deployments, beyond the BIRT iServer:

- A load simulator mimicked actual users by simultaneously submitting multiple URL requests to Actuate Information Console.
- Actuate Information Console, running on the IBM WebSphere Application Server, relayed requests to the BIRT iServer.
- For report generation requests, the BIRT iServer fetched report data from a DB2 database.
- For report viewing requests, the BIRT iServer retrieved the report document from the iServer Report Encyclopedia and rendered it in HTML. The Information Console then presented the results to the end user.

Figure 13 illustrates the overall system architecture tested at the IBM Innovation Center, used by the Batch Generation, Standard Viewing, Interactive Viewing, and On-Demand Reporting test scenarios.

Figure 13 (diagram) Benchmark environment: Windows-based load simulators submit requests over Gigabit Ethernet to BIRT Information Console instances running WebSphere 6.1 on Windows, which relay them to a BIRT iServer cluster of 8 nodes (64 CPUs, 64-bit AIX, each node an 8-way 1.9 GHz IBM p595 LPAR). The iServer nodes connect to a DB2 database server (8-CPU 1.9 GHz IBM System p595) as the data source and to an NFS server (8-CPU 1.9 GHz IBM System p595) fronting the Report Encyclopedia on an IBM DS8100 storage array attached via SAN. Actuate simulates users in an environment closely resembling a real-life configuration, with an application server layer, database, and file server.

The BIRT iServer was deployed on an IBM System p5 p595 running 64-bit AIX 5.3. The p595 was divided into 8 LPARs, each with 4 dual-core CPUs. The 8 LPARs made up the 8 nodes of the BIRT iServer cluster. For the web tier, 15 separate Information Console instances were deployed, each on a dual-CPU Intel-based machine running IBM WebSphere 6.1 on Windows 2003. To simulate the actions of end users, 8 Intel-based machines with 16 CPUs in total were dedicated to load simulation, generating reporting requests from simulated users and submitting those requests to Information Console instances.

For the Ad-hoc Authoring scenario, a single configuration was tested due to time constraints. Because BIRT Studio is an AJAX application, most of the work is done in the Information Console web tier, not the BIRT iServer. Thus, this test exercised 3 Information Console instances, each with a single dual-core CPU, for a total of 6 Information Console CPU cores. The BIRT iServer was deployed on a single 8-core LPAR node. Unlike all the other scenarios, this test was constrained by Information Console CPU power; the BIRT iServer was significantly over-powered and tapped only a fraction of its resources.

Figure 14 (diagram) Configuration for the BIRT Studio report authoring benchmark: load simulators and BIRT Information Console instances on Windows feed a single-node BIRT iServer (64-bit AIX, 8-way 1.9 GHz IBM p595 LPAR), with the DB2 server and NFS-fronted Report Encyclopedia each on an 8-CPU 1.9 GHz IBM System p595 and an IBM DS8100 array attached via SAN. Only smaller configurations were tested, due to the lab time schedule.

For all tests, an IBM System Storage DS8100 high-performance disk storage array housed the BIRT iServer Report Encyclopedia. Actuate configured the disk array as RAID 10 for both high performance (striping) and high availability (mirroring). All Actuate server nodes communicated with the DS8100 via a dedicated NFS file server. Note that while the IBM System Storage DS8100 product is suitable for customers who require large-scale storage with 24x7 reliability, it is possible to configure the Actuate server in a lower-cost, less reliable configuration. A SAN is not required.

Where possible, the benchmark configuration was deliberately designed so that the BIRT iServer itself was the limiting factor. By providing more than enough processing power to all other system components, and by using a network and storage devices that are more than fast enough, we were able to test the true limitations of the BIRT iServer. The exception is the ad-hoc BIRT report authoring scenario, where the configuration was designed to make the BIRT Information Console, not the BIRT iServer, the constraining system component.

Hardware configuration for the IBM Actuate Benchmark (Figure 15):

BIRT iServer nodes
    Software: BIRT iServer
    Number of nodes: 8 (1 node for the Ad-hoc BIRT Reports test)
    Machine: 8-core LPAR within an IBM System p5 p595
    CPUs: 8 CPU cores per node, 64 CPU cores total (8 CPU cores for the Ad-hoc BIRT Reports scenario)
    CPU speed: 1.9 GHz POWER5
    RAM: 16 GB per node
    O/S: AIX 64-bit Version 5.3

Active Portal application servers
    Software: BIRT Information Console on IBM WebSphere Application Server 6.1
    Number of nodes: 15 (3 nodes for the Ad-hoc BIRT Reports test)
    Machine: dual-CPU 3.6 GHz IBM eServer xSeries x336
    CPUs: 2 CPUs each, 30 CPUs total (6 CPUs total for the Ad-hoc BIRT Reports scenario)
    CPU speed: 3.6 GHz Intel P4
    RAM: 4 GB each
    O/S: Windows 2003

Load simulator
    Number of nodes: 8
    Machine: dual-CPU 3.6 GHz IBM eServer xSeries x336
    CPUs: 2 CPUs each, 16 CPUs total
    CPU speed: 3.6 GHz Intel P4
    RAM: 4 GB each
    O/S: Windows 2003

DB2 database server
    Software: IBM DB2 8.2
    Number of nodes: 1
    Machine: 8-CPU POWER5 (IBM System p595)
    CPU speed: 1.9 GHz POWER5
    RAM: 32 GB
    O/S: AIX 64-bit Version 5.3

Encyclopedia file server
    Software: NFS
    Number of nodes: 1
    Machine: 8-CPU POWER5 (IBM System p595)
    CPU speed: 1.9 GHz POWER5
    RAM: 32 GB
    O/S: AIX 64-bit Version 5.3

BIRT Encyclopedia storage
    IBM System Storage DS8100, configured with 32 disks in RAID 10, attached to the Encyclopedia file server via SAN fabric

Conclusion

Recent benchmarks at the IBM Innovation Center for Business Partners proved that the BIRT iServer, and more specifically BIRT on the iServer, is able to generate billions of report pages per day and support millions of named users. While this series of benchmark tests specifically exercised BIRT Reports on the BIRT iServer, e.reports and BIRT Spreadsheet show comparable performance. Customers can be confident that all Actuate report types meet the most demanding performance and scalability requirements, since they are all generated on the same industry-leading high-performance platform, the BIRT iServer.

Appendix: At a Glance Summary of Benchmark Results

Batch Generation of Cached Reports
    CPU cores: 64 | Throughput: 21,653 pages/sec | Users: n/a | Scaling efficiency: 98.9% | Average response time: n/a

Standard Viewing of Cached Reports
    CPU cores: 64 | Throughput: 1,313.2 viewing requests/sec | Users: 43,000 active users | Scaling efficiency: 92.1% | Average response time: 0.64 sec

Interactive Viewing
    CPU cores: 64 | Users: 10,000 active users | Scaling efficiency: 99.8% | Average response time: 1.31 sec

On-Demand Reporting
    CPU cores: 64 | Users: 10,500 active users | Scaling efficiency: 97.7% | Average response time: 3.27 sec

BIRT Studio
    CPU cores: 6 (iPortal) | Throughput: 32.5 report requests/sec | Users: 1,000 active users creating reports | Scaling efficiency: n/a | Average response time: 1.20 sec

Actuate Corporation, 2207 Bridgepointe Parkway, Suite 500, San Mateo, CA


More information

The IBM Cognos Platform for Enterprise Business Intelligence

The IBM Cognos Platform for Enterprise Business Intelligence The IBM Cognos Platform for Enterprise Business Intelligence Highlights Optimize performance with in-memory processing and architecture enhancements Maximize the benefits of deploying business analytics

More information

Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture

Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information

More information

Maximum performance, minimal risk for data warehousing

Maximum performance, minimal risk for data warehousing SYSTEM X SERVERS SOLUTION BRIEF Maximum performance, minimal risk for data warehousing Microsoft Data Warehouse Fast Track for SQL Server 2014 on System x3850 X6 (95TB) The rapid growth of technology has

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

Performance Optimization Guide Version 2.0

Performance Optimization Guide Version 2.0 [Type here] Migration Optimization Performance Optimization Guide Version 2.0 Publication Date: March 27, 2014 Copyright 2014 Metalogix International GmbH. All Rights Reserved. This software is protected

More information

IBM Software Information Management Creating an Integrated, Optimized, and Secure Enterprise Data Platform:

IBM Software Information Management Creating an Integrated, Optimized, and Secure Enterprise Data Platform: Creating an Integrated, Optimized, and Secure Enterprise Data Platform: IBM PureData System for Transactions with SafeNet s ProtectDB and DataSecure Table of contents 1. Data, Data, Everywhere... 3 2.

More information

The functionality and advantages of a high-availability file server system

The functionality and advantages of a high-availability file server system The functionality and advantages of a high-availability file server system This paper discusses the benefits of deploying a JMR SHARE High-Availability File Server System. Hardware and performance considerations

More information

White Paper on Consolidation Ratios for VDI implementations

White Paper on Consolidation Ratios for VDI implementations White Paper on Consolidation Ratios for VDI implementations Executive Summary TecDem have produced this white paper on consolidation ratios to back up the return on investment calculations and savings

More information

PARALLELS CLOUD SERVER

PARALLELS CLOUD SERVER PARALLELS CLOUD SERVER Performance and Scalability 1 Table of Contents Executive Summary... Error! Bookmark not defined. LAMP Stack Performance Evaluation... Error! Bookmark not defined. Background...

More information

Understanding the Performance of an X550 11-User Environment

Understanding the Performance of an X550 11-User Environment Understanding the Performance of an X550 11-User Environment Overview NComputing's desktop virtualization technology enables significantly lower computing costs by letting multiple users share a single

More information

Scalability Factors of JMeter In Performance Testing Projects

Scalability Factors of JMeter In Performance Testing Projects Scalability Factors of JMeter In Performance Testing Projects Title Scalability Factors for JMeter In Performance Testing Projects Conference STEP-IN Conference Performance Testing 2008, PUNE Author(s)

More information

Oracle Applications Release 10.7 NCA Network Performance for the Enterprise. An Oracle White Paper January 1998

Oracle Applications Release 10.7 NCA Network Performance for the Enterprise. An Oracle White Paper January 1998 Oracle Applications Release 10.7 NCA Network Performance for the Enterprise An Oracle White Paper January 1998 INTRODUCTION Oracle has quickly integrated web technologies into business applications, becoming

More information

Lab - Dual Boot - Vista & Windows XP

Lab - Dual Boot - Vista & Windows XP Lab - Dual Boot - Vista & Windows XP Brought to you by RMRoberts.com After completing this lab activity, you will be able to: Install and configure a dual boot Windows XP and Vista operating systems. Explain

More information

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Applied Technology Abstract Microsoft SQL Server includes a powerful capability to protect active databases by using either

More information

Benchmarking Microsoft SQL Server Using VMware ESX Server 3.5

Benchmarking Microsoft SQL Server Using VMware ESX Server 3.5 WHITE PAPER DATA CENTER FABRIC Benchmarking Microsoft SQL Server Using VMware ESX Server 3.5 The results of a benchmarking study performed in Brocade test labs demonstrate that SQL Server can be deployed

More information

Managing Capacity Using VMware vcenter CapacityIQ TECHNICAL WHITE PAPER

Managing Capacity Using VMware vcenter CapacityIQ TECHNICAL WHITE PAPER Managing Capacity Using VMware vcenter CapacityIQ TECHNICAL WHITE PAPER Table of Contents Capacity Management Overview.... 3 CapacityIQ Information Collection.... 3 CapacityIQ Performance Metrics.... 4

More information

Actuate Business Intelligence and Reporting Tools (BIRT)

Actuate Business Intelligence and Reporting Tools (BIRT) Product Datasheet Actuate Business Intelligence and Reporting Tools (BIRT) Eclipse s BIRT project is a flexible, open source, and 100% pure Java reporting tool for building and publishing reports against

More information

VDI Without Compromise with SimpliVity OmniStack and VMware Horizon View

VDI Without Compromise with SimpliVity OmniStack and VMware Horizon View VDI Without Compromise with SimpliVity OmniStack and VMware Horizon View Page 1 of 16 Introduction A Virtual Desktop Infrastructure (VDI) provides customers with a more consistent end user experience and

More information

Scalability and BMC Remedy Action Request System TECHNICAL WHITE PAPER

Scalability and BMC Remedy Action Request System TECHNICAL WHITE PAPER Scalability and BMC Remedy Action Request System TECHNICAL WHITE PAPER Table of contents INTRODUCTION...1 BMC REMEDY AR SYSTEM ARCHITECTURE...2 BMC REMEDY AR SYSTEM TIER DEFINITIONS...2 > Client Tier...

More information

Microsoft Office SharePoint Server 2007 Performance on VMware vsphere 4.1

Microsoft Office SharePoint Server 2007 Performance on VMware vsphere 4.1 Performance Study Microsoft Office SharePoint Server 2007 Performance on VMware vsphere 4.1 VMware vsphere 4.1 One of the key benefits of virtualization is the ability to consolidate multiple applications

More information

Managing Orion Performance

Managing Orion Performance Managing Orion Performance Orion Component Overview... 1 Managing Orion Component Performance... 3 SQL Performance - Measuring and Monitoring a Production Server... 3 Determining SQL Server Performance

More information

Ready Time Observations

Ready Time Observations VMWARE PERFORMANCE STUDY VMware ESX Server 3 Ready Time Observations VMware ESX Server is a thin software layer designed to multiplex hardware resources efficiently among virtual machines running unmodified

More information

An Oracle White Paper Released Sept 2008

An Oracle White Paper Released Sept 2008 Performance and Scalability Benchmark: Siebel CRM Release 8.0 Industry Applications on HP BL460c/BL680c Servers running Microsoft Windows Server 2008 Enterprise Edition and SQL Server 2008 (x64) An Oracle

More information

Tuning Tableau Server for High Performance

Tuning Tableau Server for High Performance Tuning Tableau Server for High Performance I wanna go fast PRESENT ED BY Francois Ajenstat Alan Doerhoefer Daniel Meyer Agenda What are the things that can impact performance? Tips and tricks to improve

More information

MS SQL Performance (Tuning) Best Practices:

MS SQL Performance (Tuning) Best Practices: MS SQL Performance (Tuning) Best Practices: 1. Don t share the SQL server hardware with other services If other workloads are running on the same server where SQL Server is running, memory and other hardware

More information

Toolbox 4.3. System Requirements

Toolbox 4.3. System Requirements Toolbox 4.3 February 2015 Contents Introduction... 2 Requirements for Toolbox 4.3... 3 Toolbox Applications... 3 Installing on Multiple Computers... 3 Concurrent Loading, Importing, Processing... 4 Client...

More information

LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM

LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM Leverage Vblock Systems for Esri's ArcGIS System Table of Contents www.vce.com LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM August 2012 1 Contents Executive summary...3 The challenge...3 The solution...3

More information

Using Synology SSD Technology to Enhance System Performance Synology Inc.

Using Synology SSD Technology to Enhance System Performance Synology Inc. Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_SSD_Cache_WP_ 20140512 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges...

More information

White paper: Unlocking the potential of load testing to maximise ROI and reduce risk.

White paper: Unlocking the potential of load testing to maximise ROI and reduce risk. White paper: Unlocking the potential of load testing to maximise ROI and reduce risk. Executive Summary Load testing can be used in a range of business scenarios to deliver numerous benefits. At its core,

More information

BIRT Performance Scorecard Root Cause Analysis and Data Visualization The Path to Higher Performance

BIRT Performance Scorecard Root Cause Analysis and Data Visualization The Path to Higher Performance BIRT Performance Scorecard Root Cause Analysis and Data Visualization The Path to Higher Performance Best-in-Class Performance Management powered by Best-in-Class Business Intelligence BIRT Performance

More information

Solving Rendering Bottlenecks in Computer Animation

Solving Rendering Bottlenecks in Computer Animation Solving Rendering Bottlenecks in Computer Animation A technical overview of Violin NFS caching success in computer animation November 2010 2 Introduction Computer generated animation requires enormous

More information

Siebel & Portal Performance Testing and Tuning GCP - IT Performance Practice

Siebel & Portal Performance Testing and Tuning GCP - IT Performance Practice & Portal Performance Testing and Tuning GCP - IT Performance Practice By Zubair Syed (zubair.syed@tcs.com) April 2014 Copyright 2012 Tata Consultancy Services Limited Overview A large insurance company

More information

Hardware Guide. Hardware Guide for Dynamics NAV. Microsoft Dynamics NAV 5.0. White Paper. Version 1 (October 25, 2007)

Hardware Guide. Hardware Guide for Dynamics NAV. Microsoft Dynamics NAV 5.0. White Paper. Version 1 (October 25, 2007) Hardware Guide Microsoft Dynamics NAV 5.0 Hardware Guide for Dynamics NAV White Paper Version 1 (October 25, 2007) Acknowledgements This white paper is a collaboration of Customer Service and Support and

More information

Dragon Medical Enterprise Network Edition Technical Note: Requirements for DMENE Networks with virtual servers

Dragon Medical Enterprise Network Edition Technical Note: Requirements for DMENE Networks with virtual servers Dragon Medical Enterprise Network Edition Technical Note: Requirements for DMENE Networks with virtual servers This section includes system requirements for DMENE Network configurations that utilize virtual

More information