Tableau Server 9.0 Scalability: Powering Self-Service Analytics at Scale
Neelesh Kamkolkar, Product Manager


Table of Contents

Motivation
Background
Executive Summary
Tableau Server Powers Tableau Public
Dogfooding at Cloud Scale
New Architecture Updates
New Minimum Hardware Requirements
Performance Improvements
Parallel Queries
Query Fusion
Cache Server External Query Cache
Horizontal Scale for Data Engine
Other Improvements
Scalability Testing Goals
Testing Approach & Methodology
Virtual Machines
Physical Machines
System Saturation and Think Time
Little's Law
Think Time
Workload Mix Changes
New Methodology
Test Workbook Examples
Extract Characteristics
Standardized Isolated Environment
Deployment Topology
Measurement & Reporting
Transaction
Throughput
Saturation Throughput
Response Time
Concurrent Users
Results
Comparing Scalability of Tableau Server 9.0 with 8.3
Linearly Scaling Throughput
Overall Hardware Observations
Memory
Disk Throughput
Network Usage
8-Core Single Machine Comparison
Increased Memory Requirements
High Availability Impact
Applying Results
Backgrounder Considerations
Best Practices: DIY Scale Testing
TabJolt - Tooling for Scalability Testing
Best Practices for Optimization In The Real World
Summary

Motivation

Many of our customers are making a strategic choice to deliver self-service analytics at scale. It's natural for our customers (IT and business alike) to want to understand how Tableau Server scales to support all their users globally. In addition, customers want to plan ahead for capacity and hardware budget allocations to accommodate increased adoption of Tableau. As part of our Tableau 9.0 release process, we set a goal to understand how Tableau Server 9.0 compares in scalability characteristics with Tableau Server 8.3. We also wanted to understand whether Tableau Server 9.0 scaled linearly and how increased loads affected its availability.

Background

If you are used to traditional BI or are new to Tableau, it may help to understand some core differences in how Tableau works. Unlike traditional BI reports that are designed and developed for a limited set of requirements, Tableau visualizations are built for interactivity. Users can ask any number of questions about their data without having to go through a traditional software development life cycle to create new visualizations. To provide self-service analytics at scale and help keep users in the flow of analysis, we have built on top of existing innovative technologies for Tableau Server 9.0.

With Tableau, the age-old idea of "query first, visualize next" is completely changed. Patented technologies, including VizQL, seamlessly combine query and visualization into one process. Users focus on their business problems and on asking questions of their data, instead of the old way of selecting data and picking from pre-built chart types. They iteratively drag and drop dimensions, blend datasets, and create calculations on various measures. During this process, Tableau creates clear visualizations and seamlessly runs the needed queries at the same time. This is a different paradigm that you should factor in as you try to understand the scalability of Tableau Server. If you come from a traditional BI world, you are probably used to load-testing static reports that meet a specific service level agreement (SLA). A static report has a fixed scope and a fixed set of queries, and is often optimized by a developer, one at a time, over many weeks.

Tableau visualizations, on the other hand, may regenerate or submit new queries on behalf of the user's exploratory actions. Optimizations that enable quick retrieval of data can help the user stay in the flow of analytics instead of waiting for the results of a query. In Tableau 9.0, we have invested significantly in performance, in addition to many other areas that enable a user to remain in the flow of analytics. This whitepaper explains how Tableau Server 9.0 performs and scales with increasing user load across various configurations, and how it compares in scalability to Tableau Server 8.3.

Executive Summary

Tableau 9.0 is the biggest release in the history of our company. Since November 2014, very early in the 9.0 release cycle, we started performance and scalability testing of new features as they were still being developed. We iteratively incorporated design feedback for new features into the performance and load testing for Tableau Server 9.0. There are a number of factors that can impact performance and scalability, including workbook design, server configuration, infrastructure tuning, and networking. Based on our goals and testing methodology we demonstrated that:

1. Tableau Server 9.0 is nearly linearly scalable across all scenarios tested.
2. Tableau Server 9.0 showed a 200+% improvement in throughput and a significant reduction in response times compared to Tableau Server 8.3.
3. Tableau Server 9.0 showed increased memory and network usage compared to 8.3.

With many new architectural updates in Tableau Server 9.0, we chose cluster topologies based on iterative testing for the new server design and common customer scenarios. In the table below (Figure 1), each row represents a Tableau Server 9.0 cluster configuration of 1 Node - 16 Cores, 2 Node - 32 Cores, and 3 Node - 48 Cores. We observed that in various configurations Tableau Server 9.0 could support the following count of users when the system was at saturation. The concurrent users in the table represent the number of end users accessing visualizations and interacting with them concurrently, at server saturation, using Little's Law.

In our test scenarios, we assume that roughly 10% of the total end users in an organization or department are concurrently accessing and interacting with visualizations. Based on our testing and workloads, we observed that Tableau Server 9.0 can support up to 927 total users on a 16-core single machine deployment, and scales up to 2809 total users on a 48-core, 3-node cluster setup, as shown in the table.

Deployment Configuration    Tableau Server 9.0 Concurrent Users    Tableau Server 9.0 Total Users
1 Node - 16 Cores           ~93                                    927
2 Node - 32 Cores           —                                      —
3 Node - 48 Cores           ~281                                   2809

Figure 1: Tableau Server 9.0 scalability summary

In addition, we demonstrated that Tableau Server 9.0 scales nearly linearly by adding more nodes to the cluster. While in the table above we assumed a 10% user concurrency (that is, 10% of the total number of people in an organization are expected to be simultaneously viewing or interacting with visualizations), your level of user concurrency may vary. In some cases we have seen concurrency as low as 1%.

In this whitepaper, we will start by providing some real-world examples of Tableau Server scalability. We will describe the new changes in architecture in Tableau Server 9.0, as well as our testing approach and methodologies, to help you better understand Tableau Server 9.0 scalability. Lastly, we will provide some guidance on how you can apply the lessons from our experiments in your environments.

Tableau Server Powers Tableau Public

Tableau Server is being deployed at cloud and enterprise scales across many organizations. This includes several deployments at Tableau Software. Tableau Public is our free, premium cloud service that lets anyone publish interactive data to the web. Tableau Public supports a massive number of workbooks, authors, and real-time views. We just recently increased the data extract size from 1 million rows to 10 million rows and increased total storage to 10GB for every Tableau Public user.

With over 100,000 authors, over 450 million views, and 500,000 visualizations, Tableau Public plays a key role in allowing us to use our own products.

Dogfooding at Cloud Scale

Using our own products to do our work on a daily basis is a core Tableau cultural value. Tableau Public gives us a cloud-scale test environment to test new versions of Tableau Server. As part of the product release process, we deploy Tableau Server pre-release software to Tableau Public. This enables us not only to deploy our products at large scale in a production, mission-critical environment, but also to understand, find, and fix issues related to scalability. We deployed Tableau Server 9.0 to Tableau Public in the 9.0 Beta cycle. This gave us ample opportunity not only to learn how the new architecture scales in a real production situation, but also helped us find and fix issues before we released the product to corporate customers.

Figure 2: Point-in-time view of Tableau Public usage

Tableau Public has served more than 450 million impressions in its lifetime, with over 27 million in just the last month. It also supports more than 100,000 authors who are creating and publishing over 500,000 visualizations to Tableau Public.

The Tableau Public configuration is similar to a corporate deployment of Tableau Server with a few exceptions. All Tableau Public users are limited to a fixed extract size of up to 10 million rows of data. Since it's an open, free platform, users on Tableau Public don't expect the same level of security when accessing public data. Additionally, Tableau Public uses a custom front-end called Author Profiles for managing workbooks instead of the Application Server (Vizportal) process. However, Tableau Public runs tens of thousands of queries every single day, and while the data sizes are relatively small, they have a high degree of variability. Tableau Public, powered by Tableau Server 9.0, has been a strong testing ground for the architecture updates we made in Tableau Server 9.0.

New Architecture Updates

In Tableau Server 9.0, many of the new capabilities are rooted in a strong architectural foundation that extends and expands the pre-existing enterprise architecture of Tableau Server. We have added several new server processes to Tableau Server to support these new capabilities. To understand how to manage scalability with Tableau Server 9.0, it's important to get familiar with these components and understand their roles. For simplification in Figure 3, we have rolled up multiple server processes into a logical architecture of higher-level service layer groups.

Figure 3: Logical architecture for a single server node. The diagram shows a Gateway (reverse proxy) in front of a user tier (Content Management Services*, Visualization Services, Data Provider Services, API Services), a storage tier (Repository (Postgres), File Store*), and a management tier (Cluster Controller*, Coordination Service*, Backgrounder).

Multiple server processes work together to provide services at the various tiers. The gateway is the component that directs traffic to all server nodes. You can put an external load balancer in front of the server cluster (not shown in Figure 3) and have a gateway on every node for improved high availability. The user tier consists of content management, visualization, data provider, and API services. The storage tier has the content Repository and a new File Store process. Structured relational data like metadata, permissions information, and Tableau workbooks are in the Repository. The File Store process is for users' data (Tableau data extracts) and enables data extract file redundancy across the cluster. The management tier provides a set of services that allows a server administrator to effectively manage the cluster and ensure high availability. For details on the individual server processes, please review the administration guide.

New Minimum Hardware Requirements

With the new services on the server supporting new capabilities, the minimum requirement for the 64-bit server installer has gone up to 4 cores and 8GB RAM. While the minimum is 4 cores for installation, we do not recommend load or scale testing a single node server using a 4-core machine. A single 4-core server is typically for small trials and prototyping. Large enterprise deployments should consider using 16-core servers for each node.

Performance Improvements

Performance improvements help in providing better response times to end users and promoting company-wide usage. Performance improvements have been made across the entire analytics flow. However, there are many variables that impact performance, and your results may vary depending on your situation. Below, we will cover a few of the important improvements that will help guide your deployment for both performance and scale.

Parallel Queries

Parallel queries are designed to enable Tableau to use the back-end databases more effectively, speeding up the users' interactions with a visualization. In Tableau Server 9.0 we now look at a visualization's queries sent to the back-end databases and, when appropriate, de-duplicate them and issue multiple queries simultaneously. This means that Tableau Server can have multiple connections open to your back-end database and leverage more database resources where possible. This allows compatible databases to work on queries in parallel instead of sequentially, resulting in significantly faster query results. Whether this capability benefits you specifically depends on how your back-end databases handle parallel work presented to them.
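To make the idea concrete, here is a minimal sketch of de-duplicating a visualization's queries and issuing the unique set in parallel. This is an illustration only, not Tableau's implementation; the `run_query` callable and the connection limit are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def run_queries_in_parallel(queries, run_query, max_connections=4):
    """Run a visualization's queries in parallel after de-duplicating them.

    `queries` is a list of SQL strings (duplicates allowed); `run_query` is any
    callable that executes one SQL string against the back-end database and
    returns its rows.
    """
    unique_queries = list(dict.fromkeys(queries))  # de-duplicate, preserve order
    with ThreadPoolExecutor(max_workers=max_connections) as pool:
        results = dict(zip(unique_queries, pool.map(run_query, unique_queries)))
    # Every original query, including duplicates, shares one result.
    return [results[q] for q in queries]
```

The same pattern explains why databases that parallelize well benefit the most: the server keeps several connections busy at once instead of walking the query list sequentially.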

Query Fusion

As the name suggests, we take multiple separate queries from a dashboard and fuse them together where possible, reducing the number of queries sent to the back-end database. This is particularly beneficial for live connections. However, if your dashboard is not generating any queries that are combinable, this optimization will not help you.

Figure 4: Query Fusion in Tableau Server 9.0. Queries that are identical except for the columns they return (for example, differing only by aggregations or calculations) are fused into a single query that returns all of the necessary columns.
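As a rough illustration of the fusion rule described above, the sketch below groups queries that share the same FROM, WHERE, and GROUP BY clauses and merges their select lists. The dictionary-based query representation is purely hypothetical and only stands in for whatever internal form the server uses.

```python
from collections import defaultdict

def fuse_queries(queries):
    """Fuse queries that differ only in the columns/aggregates they return.

    Each query is a dict such as:
      {"from": "trips", "where": "payment = 'Cash'",
       "group_by": ("pickup_date",), "select": ("SUM(fare)",)}
    """
    grouped = defaultdict(list)
    for q in queries:
        grouped[(q["from"], q["where"], q["group_by"])].append(q)

    fused = []
    for (table, where, group_by), group in grouped.items():
        columns = []
        for q in group:
            for col in q["select"]:
                if col not in columns:        # union of the output columns
                    columns.append(col)
        fused.append({"from": table, "where": where,
                      "group_by": group_by, "select": tuple(columns)})
    return fused
```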

Cache Server External Query Cache

If you have just loaded a workbook and run all of its queries for the first time, in many cases the data in the back-end will not have changed by the time you close and re-open the workbook. If this is characteristic of your data freshness and usage scenarios, then loading these workbooks a second time will be significantly faster for your end users. With the external query cache, we save the results from previous queries for fast access by future users. The Cache Server process is powered by Redis, which is a highly scalable key-value cache used by many large internet-scale providers.

Figure 5: A simplified view of how caching works. The Application Server, API Server, Data Server, Backgrounder, and VizQL Server processes each consult the Cache Server's abstract query cache before going to the databases.

The figure shows a simplified version of the Cache Server interactions between processes. For simplicity, other processes that don't interact with the Cache Server are not shown. Each process has an in-memory cache called the query cache. The server process first tries to look for what it needs in the query cache. If it doesn't find it in the in-memory cache, it tries to find what it needs in the Cache Server. If the result is in the Cache Server, it is copied to the in-memory cache and returned. If it's not in either place, the query is run on the database and the results are cached in a Cache Server as well as in the in-memory cache of the process that needed it. Caches in each Cache Server are accessible by all server processes and nodes in the whole cluster.
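The lookup order described above (in-memory query cache, then the shared Cache Server, then the database) is a classic two-tier cache. The sketch below shows the pattern using a plain dictionary for the per-process cache and a Redis instance standing in for the Cache Server; the key layout and serialization are assumptions for illustration, not Tableau's actual scheme.

```python
import hashlib
import json

import redis  # the Cache Server process is backed by Redis

local_cache = {}                                          # per-process, in-memory query cache
cache_server = redis.Redis(host="localhost", port=6379)   # shared across the cluster

def cached_query(sql, run_query):
    """Return a query result from the local cache, the shared cache, or the database."""
    key = "query:" + hashlib.sha1(sql.encode()).hexdigest()
    if key in local_cache:                   # 1. in-memory query cache
        return local_cache[key]
    hit = cache_server.get(key)              # 2. external Cache Server
    if hit is not None:
        result = json.loads(hit)
    else:
        result = run_query(sql)              # 3. fall through to the database
        cache_server.set(key, json.dumps(result))
    local_cache[key] = result                # copy into the in-memory cache
    return result
```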

Horizontal Scale for Data Engine

New for Tableau Server 9.0, the Data Engine, which is the component responsible for loading extracts into memory and querying them, is now horizontally scalable to N nodes in a cluster. Previously it was limited to 2 nodes. This also allows you to build highly scalable clusters when using Tableau extracts.

Other Improvements

In addition, we made many improvements across the Data Engine by adding support for parallel queries (noted above) and vectorization, improved rendering, faster extract creation, support for temp tables in the Data Server, and more. There are many additional new capabilities and features across the Tableau 9.0 product line. In the section above we reviewed just the key new server components and their roles.

Scalability Testing Goals

Early in November 2014, we set out to understand how Tableau Server 9.0's scalability characteristics compared with Tableau Server 8.3 under increasing loads. We also wanted to understand whether Tableau Server 9.0 scaled linearly and whether it bent or broke with increasing loads while maintaining an average response time of three seconds or less. It was not a goal for us to preserve consistency of the methodology, workloads, and workload mixes with the previous iteration of this whitepaper. We had to iteratively update and inform all of these based on the design and architecture changes planned for the new release. For example, we focused on workloads that exercised the new features of Tableau Server 9.0 while remaining realistic from a customer perspective. Given the departure in workloads and the changes in methodology, which we will share later in the paper, you should not compare the Tableau Server scalability results published in this whitepaper with the scalability numbers published in previous whitepapers. The variations in testing are significant enough that a direct comparison is not possible. For the purposes of this paper, and for comparing scalability with the previous release, we explicitly ran the same tests against both Tableau Server 9.0 and Tableau Server 8.3 using the same hardware. We followed the same methodology both times.

Testing Approach & Methodology

A lot of the methodology we used for this paper is informed not only by commonly used best practices but also by the design changes we were making in Tableau Server 9.0. For example, in addition to using customer-facing workbooks, we were selective about the additional workbooks we added to the test mix. This is because we needed workbooks that would represent user actions that explicitly exercise the new features being built. Holistically, there are a number of different workloads that can run on Tableau Server, from end users loading visualizations, to automatic subscriptions, extract refresh jobs, and more. As part of the methodology, we focused predominantly on the end user workloads because part of our goal was to understand how many end users the system can support at saturation. When you plan for overall capacity, you should plan and account for the capacity needed to run the backgrounders in addition to the concurrent user load. Typically, you want to deploy between N/4 and N/2 backgrounders on a machine, where N is the number of cores on the machine (a small sizing sketch follows at the end of this section). Detailed guidance and considerations on backgrounders are already part of the server administration guide.

In the user-facing tier, the primary processes on Tableau Server that service user requests are the VizQL Server and the new and improved Application Server. There are other processes, like the API server, which only services API client requests. We have excluded that from our test methodology to stay focused on the end user workloads. The VizQL Server process is CPU-bound by design and needs sufficient resources allocated to ensure proper performance and scalability.
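For the backgrounder sizing rule of thumb mentioned above, the arithmetic is simple; a minimal sketch:

```python
def backgrounder_range(cores):
    """Suggested backgrounder process count for a machine with `cores` cores:
    between N/4 and N/2, leaving headroom for the user-facing processes."""
    return max(1, cores // 4), max(1, cores // 2)

print(backgrounder_range(16))   # (4, 8) backgrounders on a 16-core node
```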

Virtual Machines

Many customers deploy Tableau Server on virtual machines and run successful, scalable deployments. It is not the goal of this whitepaper to exhaustively distinguish between physical and virtual infrastructure environments or across the various virtualization platforms that are available. The level of performance and scalability you can get on a virtualization platform also depends on the configuration and tuning of the virtualization parameters for a given platform. For example, overcommitting CPU on VMware ESX is not recommended for Tableau Server, because with heavier workloads, other applications may compete with Tableau Server's resource needs. Instead, you should consider running Tableau Server on VMs with dedicated CPU affinity. There are virtualization platform vendor-specific whitepapers you should review for best practices for your chosen virtualization platform. A couple of examples for VMware are listed below for your consideration.

Performance Best Practices for vSphere 5.5 guide
Deploying Extremely Latency-Sensitive Applications in vSphere 5.5

Tableau Server 9.0 runs as a server-class application on top of any virtualization platform. It requires sufficient compute resources and should be deployed with that in mind. We recommend you seek guidance from your virtualization platform vendor to perform tuning for your server deployment.

Physical Machines

Each physical machine deployment will vary depending on many factors. For the purposes of these experiments, we wanted to minimize variability from virtualization platforms and their specific tuning. So, we deployed Tableau Server 9.0 clusters on physical machines with a homogeneous hardware configuration in a network-isolated lab. For each test pass, we ran a predefined set of workloads and load mix against 16, 32, and 48 cores across various cluster topologies. Through each iteration we recorded not only the key performance indicators, but also system metrics and application server metrics using JMX. For each of the runs, we correlated the data and analyzed how the system behaved under increasing user loads. At the end of each of the iterations, given the architectural changes, we reviewed the results with our architecture team to inform future testing and methodology updates. We also found and fixed scalability bugs as part of our agile development process.

We ran many experiments that informed the deployment topologies for the final tests. These experiments included studies of how server scalability is impacted by various server component interactions. We will share these results as part of this whitepaper. In all, we ran over 1,000 test iterations across one topology, with each iteration taking roughly two hours to complete. We measured and collected a variety of system metrics and application metrics during the load tests to understand how the system scaled with increasing loads while adding more workers to the cluster.

System Saturation and Think Time

Often, infrastructure teams will want to measure and monitor CPU on the various server processes and the machine. Typically, infrastructure teams want to allow for sufficient CPU capacity headroom for burst load. For example, 80% utilization on CPU could be a good indicator of saturation from an infrastructure point of view. However, Tableau Server 9.0 is a workhorse and requires sufficient compute capacity to do its work. It is not uncommon, at times, to see some processes in a server cluster taking up 100% of a CPU's cycles. This is by design and something the infrastructure teams should consider as part of their monitoring strategy.

We defined system saturation as the point during the load test where we attain peak throughput while keeping average response times at or under three seconds. If the average latency exceeded three seconds, we ignored any further increase in client throughput because we wanted to take a conservative view on the reported numbers. What this means in the context of our experiments is that Tableau Server could allow more incremental user load on the system at the expense of increased latencies for new users coming onto the system. In addition, we set a goal of < 1% error rates (socket, HTTP, or other) for picking the point where we measured saturation.
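Stated as a procedure, the saturation point is simply the highest-throughput sample that still satisfies the response-time and error-rate goals. The sketch below applies that rule to hypothetical load-test samples; the field names are assumptions, not the format of our test harness.

```python
def saturation_point(samples, max_response=3.0, max_error_rate=0.01):
    """Pick the saturation point from load-test samples.

    `samples` is a list of dicts recorded at increasing user load, e.g.
      {"users": 120, "tps": 180.0, "avg_response": 1.4, "error_rate": 0.002}
    Returns the highest-throughput sample that still meets the goals;
    throughput gained after the goals are broken is ignored.
    """
    eligible = [s for s in samples
                if s["avg_response"] <= max_response
                and s["error_rate"] < max_error_rate]
    return max(eligible, key=lambda s: s["tps"]) if eligible else None
```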

Below is a summary view of the measurements we took at system saturation, comparing Tableau 8.3 and Tableau 9.0. The view shows KPIs like TPS, average response time, concurrent users (using Little's Law), and error rate.

Figure 6: One test scenario showing the system saturation point

Once we determined the throughput and response times at saturation, we then used Little's Law to extrapolate the number of concurrent users on the system.

Little's Law

The goal of these tests was to determine the point at which the system reaches capacity, including the total number of users at that point. Little's Law helps us illustrate this point very well. Imagine a small coffee shop that has one barista who can only help one customer at a time. People enter, buy coffee, and leave. A basic cup of coffee is served up quickly, and more complex drinks take longer. In addition, if the barista were to take additional time to review instructions on preparing a drink, then the total time for servicing the customer is the time taken to review instructions plus the time required to make the drink. The end-to-end service time drives the rate at which the shop serves people and sends them on their way. However, if the number of customers arriving exceeds the number of customers leaving, eventually the coffee shop will fill up and no one else can enter. The coffee shop is maxed out. The variables that determine the maximum number of customers in the shop at any one time are the length of time they spend there, the complexity of their drink order, and the number of workers serving them.

To apply the coffee shop analogy to Tableau Server, imagine that each barista represents a VizQL Server process. The coffee is analogous to the loading of a visualization or an end user interacting with a dashboard. Then, the number of end users concurrently loading and interacting with visualizations becomes the product of the average response time and the saturated throughput:

Concurrent Users = Average Response Time x Saturated Throughput

You may be wondering what could represent the CPU in this analogy. We could imagine the CPU being the hardware the barista uses to actually do the work: the espresso machine, the juicer, the mixer, the coffee dispenser, and so on. An espresso machine that can pour four shots at a time, compared to one shot at a time, can make a material difference in how efficiently the barista can serve customers.

Think Time

Often, load and performance testing teams factor something called think time into their response times or load-testing scenarios. While this is a realistic concept, in the context of analytics, think time can be difficult to predict.

For example, when looking at a visualization, I may quickly find what I want: a very short think time. However, this may lead to a lengthy, iterative exploration of the data. This additional exploration could all be considered the end user's think time. Traditional approaches have used think time to mimic end user delay. In our approach, we decided to test for real concurrency and did not add a specific think time delay to our tests. In effect, our think time was zero. For the user ramp, there are many possible models. We ramped up one user per second, with zero think time between their actions, until we reached saturation as defined above.

Workload Mix Changes

We started doing performance and scalability testing very early in the release cycle. Along the way, we made some key decisions that informed our approach. We wanted to use a workload mix that would ensure we exercise the new capabilities in the server and represent a realistic usage scenario, including real customer workbooks. In our previous whitepaper on the same topic, defining or classifying a workload as simple/complex/moderate proved to be challenging. It often led to subjective interpretations of what the terms meant. For example, a workbook can look visually simple but may have complexity associated with the data required for it. This would make it a compute-intensive workbook and one that benefits significantly from the product investments we made in Tableau Server 9.0. In order to simplify and exercise new features, we created a credible workload mix across (a) a mix of realistic workbooks, including customer workbooks, and (b) a mix of users viewing and interacting with workbooks.

The figure below shows a visual representation of the load mix, so you can see how we have mixed the workloads for our tests.

Figure 7: The load mix used to represent multiple workload types for Tableau Server 9.0. The load ramp against the VizQL Server adds one user per second; 65% of transactions view visualizations and 35% interact with them, with 60% browser rendering and 40% server rendering across multiple workbooks.

New Methodology

The workload mix departs from the simple, moderate, complex workbook notion that we used in past whitepapers. Instead of running the server to saturation with a workbook of one type (simple, complex, moderate) and using a user mix of viewers and interactors, we wanted to make it more realistic. We introduced a pool of workbooks that range in complexity. This pool included workbooks that exercised the brand new features of Tableau Server as well as customers' workbooks. Depending on the workbook's design, it may use browser rendering or server rendering. Browser rendering is a capability that existed in previous versions of Tableau Server. It allows modern browsers to do some of the workbook rendering, reducing the server's workload. In cases where a workbook is very complex, for performance reasons, Tableau clients push the heavier rendering work onto the server. In response, the server does the heavy lifting and just sends back tiles that make up the visualization. This is referred to as server rendering.

Tableau visualizations can therefore use modern browser capabilities where appropriate or push heavier work to the server. The choice depends on the workbook's complexity, but is transparent to the user. The updated workload mix and the new methodology of selecting from a pool of workbooks are some of the key reasons why you should not compare the previously published results for 8.1 with the results for 9.0 in this paper.

Test Workbook Examples

While we cannot publish the sample workbooks we used, as they include customer workbooks, here are some examples of the types of dashboards in use. User interaction workloads include navigating through a Tableau Story Point, filled maps with varying layers and an increasing number of marks, selection, categorical filter by index, tab switching, filter actions, and more. Below is an example of a Story Point workbook used for testing.

Figure 8: A Story Point workbook that shows climbing and accident trends in the Himalayas, with views of member-to-death ratios by season, growth in climbers and expeditions over time, summit-to-death ratios by peak, and death trends over time.

Figure 9: Number of Taxi Rides workbook with sample interactions

Another test workbook, shown above, looks visually simple but took a long time to load in previous releases. This was due to the same query being re-run separately for each of the 4 views. This workbook was based on a taxi rides data set, and the specific interactions we exercised were the following: select a categorical filter by index, switch tabs, select November on the calendar, pick the date (the 17th), filter to Cash, and switch tabs again. Other test workbooks were designed to test for performance under heavy loads, with 1,000,000 marks showing various trending analyses. All of these workbooks were built using extracts.

Extract Characteristics

We chose to test with workbooks based on extracts. This eliminates any variability that a live back-end data source can bring to the tests. Realistic live connection scenarios vary significantly depending on how the databases are used and what other loads are running on the databases themselves. The extracts we used ranged in row count from 3,000 rows to 93 million rows of data (~3.5GB in extract size).

In addition to workloads, there are many variables that can impact system performance and scalability. In order to manage this variability and to drive consistency among test runs, we standardized several aspects of the test.

Standardized Isolated Environment

First, we standardized on the hardware. We ran these scalability tests in our performance lab on physical machines with the following specifications.

Server Type: Dell PowerEdge R620
Operating System: Microsoft Windows Server 2012 R2 Standard 64-bit
CPU: 2.6 GHz, 2 x 8 cores (16 cores total), hyper-threading enabled
Memory: 64 GB

Figure 10: Hardware specification of the benchmarking environment

Deployment Topology

Across each of the cluster nodes (workers), except the primary, we maintained the following configuration of server processes:

Figure 11: The server deployment topology for the scalability testing

We scaled the workload using load generators driving the workload mix described above. During test execution, we collected system metrics, performance metrics, and application metrics using JMX. We saved the results in a relational data store. We then analyzed the results using Tableau Desktop. The figure below shows a logical but simplified view of the test execution. It's simplified only in that each cluster node does not show all the server processes running on the machine.

Figure 12: The logical and simplified view of the test environment. TabJolt load generators drive load through the gateway on each node of the Tableau Server cluster; each node runs 2 VizQL Server processes and 1 Application Server process, with a Data Engine on one node. Test results are stored and then analyzed with Tableau Desktop.

Each of the test iterations collected a lot of data, but before we jump into the results, let's go over some of the metrics and their definitions.

Measurement & Reporting

We measured a number of metrics to understand the system's performance and scalability, including system metrics for CPU, memory, and disk, and performance and scalability metrics such as response times, throughput, run duration, etc. To understand the data discussed in this whitepaper, let's quickly review some definitions.

Transaction

A transaction is the end-user experience of loading a Tableau visualization and/or interacting with a view. For example, if you are loading a visualization, the entire set of requests (HTTP) that load the visualization represents a single transaction. The response time is measured and reported for a transaction from the client's perspective (that is, from where the load is being generated).

Throughput

Throughput is the number of transactions per second (TPS). For example, 5 TPS = 432,000 transactions in a 24-hour period. Tableau Public, for example, has supported a peak of 1.3M page views in a day.

Saturation Throughput

Saturation throughput is the number of transactions per second across all clients hitting the system when the system is at saturation. Our approach to determining the saturation point is described earlier in this paper.

Response Time

Response time is measured as the amount of time it takes the server to respond to the end user request.

Concurrent Users

To understand concurrency in the context of Tableau Server, we will start by defining what concurrency is not. Many times we speak to performance teams that assume that user concurrency is defined as the number of users logged into Tableau Server. While that is a logical metric, it is not representative of concurrency in this whitepaper. The number of logged-in users only measures the scalability of the Application Server process. A user login exercises a narrow path in the system and is not the same critical path that loads and interacts with a visualization, which does a lot of the compute-intensive work. For Tableau Server, concurrency is defined as the number of end users that are actively loading and interacting with visualizations at a specified response time and throughput goal. This is a core metric that informs the number of users that we can support on a given system under test, at saturation. We use Little's Law to extrapolate the number of concurrent users based on average response times and saturated throughput across our experimentation and test execution.
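As a small worked example of that extrapolation (using the single-node, 8-core Tableau Server 9.0 figures reported later in this paper; the derived user counts are illustrative, not additional measurements):

```python
def concurrent_users(avg_response_time_sec, saturated_tps):
    """Little's Law: users concurrently in the system at saturation."""
    return avg_response_time_sec * saturated_tps

def total_users(concurrent, concurrency_ratio=0.10):
    """Total user population, assuming ~10% of all users are active at once."""
    return concurrent / concurrency_ratio

concurrent = concurrent_users(0.46, 130)   # 0.46 s average response, 130 TPS -> ~60 users
population = total_users(concurrent)       # ~600 total users at 10% concurrency
print(round(concurrent), round(population))
```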

Given that Tableau Server's scalability is informed by how the VizQL Server processes are able to process and satisfy end user requests to load and interact with visualizations, we ran a series of tests to measure that. Now that we've seen how we perform test execution, the deployment we used, and the metrics, let's review the results.

Results

With all the new features and architecture updates in Tableau Server 9.0, we ran several experiments to inform the following scenarios. We ran the same workload using the same methodology, on both Tableau Server 9.0 and Tableau Server 8.3, across the same topology and hardware in the same lab, so we could compare scalability across the releases.

Comparing Scalability of Tableau Server 9.0 with 8.3

With 16 cores or more, we see an increase in performance and scalability for Tableau Server 9.0 compared to Tableau Server 8.3. Specifically, we saw Tableau Server 9.0 scale from 927 total users on a single 16-core machine to 2809 total users on a 3-node, 48-core server cluster, with average response time well under the goal of three seconds and an error rate below 1%. For a typical workload, we demonstrated that Tableau Server 9.0 saturated throughput increased from 209 TPS on a single-node 16-core machine to 475 TPS on a 3-node 48-core cluster. Reminding ourselves that TPS corresponds to the number of visualizations loaded and interacted with in a second, we see a nearly linearly scaling system where you can scale out by adding more worker nodes to your cluster.

Topology            Saturated Throughput (TPS)   Response Time (sec)   Concurrent Users   Total Users   Error Rate
1 Node, 16 Cores    209                          < 3                   ~93                927           < 1%
2 Node, 32 Cores    —                            < 3                   —                  —             < 1%
3 Node, 48 Cores    475                          < 3                   ~281               2809          < 1%

Figure 13: Tableau Server 9.0 scalability summary

We re-ran the same tests using Tableau Server 8.3 on the same hardware, with the same methodology. We captured the results in the table below.

Topology            Saturated Throughput (TPS)   Response Time (sec)   Concurrent Users   Total Users   Error Rate
1 Node, 16 Cores    61                           —                     ~90                899           —
2 Node, 32 Cores    —                            —                     —                  —             —
3 Node, 48 Cores    152                          —                     ~207               2068          —

Figure 14: Tableau Server 8.3 scalability summary

We observed that Tableau Server 8.3 scaled to 899 total users on a single 16-core machine, compared to 927 on the same configuration for Tableau Server 9.0. In addition, Tableau Server 8.3 saturated at 2068 total users on a 3-node 48-core cluster, compared to 2809 total users on Tableau Server 9.0. However, Tableau Server 8.3 saturated more quickly, at 61 TPS on a 16-core machine compared to 209 TPS for Tableau Server 9.0. In the larger scale tests we found Tableau Server 8.3 saturated at 152 TPS on a 3-node 48-core cluster, compared to 475 TPS for Tableau Server 9.0.

Linearly Scaling Throughput

Across all of our testing, we demonstrated that Tableau Server 9.0 throughput scales nearly linearly and is more consistent compared to Tableau Server 8.3 for the same workloads, methodologies, and infrastructure used. For those customers planning to run Tableau Server on a single 8-core machine, we ran a separate set of specific tests to inform this scenario. Please see the 8-Core Single Machine Comparison section later in this paper.

Overall Hardware Observations

In addition to the specific scalability observations reviewed above, and how server scale was impacted by cores and horizontal scaling, we captured infrastructure metrics and observations. In the sections below, we will review the memory, disk, and network impact for each of the above topologies of 16, 32, and 48 cores.

Memory

Compared to Tableau Server 8.3, we observed that Tableau Server 9.0 requires between 40% more memory on a single 16-core machine and 70% more RAM on a 3-node 48-core cluster.

Figure 15: RAM utilization across 16, 32, and 48 core clusters

The increased RAM usage is a result of many of the changes we presented earlier in the whitepaper, and as part of your upgrade from 8.x, you should consider adding more RAM to your 9.0 systems.

Disk Throughput

For the disks we used in our experiments, Tableau Server 9.0 showed reduced disk throughput over a distributed cluster. In the single-machine 16-core scenario, we see a 14% increase in disk throughput consumed between 8.3 and 9.0. However, a 3-node 48-core cluster actually shows a 30% reduction in disk throughput during the load tests between server versions. In Tableau Server 9.0, we are persisting cluster state to disk, and each of the new components in Tableau Server 9.0 logs to disk.

Figure 16: Disk utilization comparison

Network Usage

In Tableau Server 9.0, we now have several components that work together with the new distributed query cache. In addition, we also have a coordination service that maintains state across the cluster. In comparison to 8.3, this shows relatively large increases in network chatter. However, we did not observe a significant impact from the network chatter on scalability or performance.

Figure 17: Network utilization comparison

So on its own, Tableau Server 9.0 scales and performs well in spite of the increased network traffic. The takeaway for a real deployment is to consider deploying Tableau Server on 10-gigabit networks when available.

8-Core Single Machine Comparison

For customers running Tableau Server on 8-core machines, we wanted to understand how Tableau Server 9.0 would behave in comparison to 8.3 after an upgrade. We ran a battery of tests with the same methodology to compare the results across 9.0 and 8.3.

For the single 8-core machine scenario, we observed that Tableau Server 9.0 is significantly better on saturated throughput and response times when compared to 8.3, as shown in the table below.

Key Indicator                 Tableau Server 8.3, 1 Node, 8 Cores   Tableau Server 9.0, 1 Node, 8 Cores
Saturated Throughput (TPS)    34                                    130
Response Time (sec)           1.8                                   0.46
Concurrent Users              ~61                                   ~60
Error Rate                    1.71%                                 0.78%

Figure 18: Comparing Tableau Server 8.3 and Tableau Server 9.0 on an 8-core machine

With Tableau Server 9.0, we observed a significantly higher saturation throughput of 130 TPS and lower average response times of 0.46 seconds, with a lower error rate of < 1%. With Tableau Server 8.3, we see a significantly lower throughput of 34 TPS and an average response time of 1.8 seconds. We were able to maintain the error rate goal of < 1% on Tableau Server 9.0, but Tableau Server 8.3 on a single machine with 8 cores had more errors, often related to client time-outs as the response times started to increase with load.

Increased Memory Requirements

Comparing Tableau Server 9.0 with 8.3 on our single-machine deployments, Tableau Server 9.0 used 38% more RAM in the 16-core tests and 68% more RAM in the 8-core tests.

Figure 19: RAM utilization comparison across 8.3 and 9.0 for single machine deployments

In addition, in multi-machine cluster deployments, Tableau Server 9.0 utilizes about 60-80% more memory at peak usage compared to Tableau Server 8.3. We have increased the minimum hardware requirement for Tableau Server 9.0 to 8GB of RAM to accommodate the additional server processes supporting new 9.0 features. For example, each Cache Server process will consume 500MB of RAM at startup. While it's efficient at what it does, the more Cache Server processes you have on a machine, the more RAM will be set aside. In addition, new processes like the File Store consume additional RAM. These didn't exist in prior versions of Tableau Server.

What this means is that, based on the specific tests in this whitepaper, if you have a single-machine 8-core Tableau Server 8.3 instance, doing an in-place upgrade to 9.0 could give your end users a performance boost. However, you may get slightly poorer scaling due to resource contention. Compared to 8.3, on a single machine we saw fewer errors with 9.0. The specific performance gains you see may vary depending on many factors. We hope that this helps inform your planning needs for capacity as you consider upgrading from 8.x to 9.0.

We made a lot of improvements to high availability (HA) in Tableau Server 9.0. In addition to introducing new server processes like the File Store, Cluster Controller, Coordination Service, etc., we are doing new work to move extracts onto all of the nodes in the cluster that have a Data Engine. We wanted to test what impact, if any, the updates to HA would have on scalability. The following section covers our observations.

High Availability Impact

In thinking about HA and non-HA, one key thing to remember is that we added several new components to the server to support the new architecture for HA. In order to test HA, we wanted to ensure we included the new application server workload in the mix. We ran the new workload on the same hardware as before. In Tableau Server 9.0, you must have at least three machines to run an HA configuration. For more details, please read the server administration guide. In one test, we enabled HA by adding a passive repository for failover and adding File Store and Data Engine processes to every node in the cluster. When compared to the non-HA deployment, we noticed a very small impact on throughput and response times when HA was enabled. This is minor and anticipated, because we are doing more work to keep the Postgres repositories and the extracts in sync. However, we see about a 10% increase in memory usage across all workers when running an HA configuration. The increase in memory usage did not impact the TPS significantly. We observed a < 1% reduction in TPS when HA was enabled. The error rate (mostly socket-read timeouts) increased from 0.1% to 0.3%, although it stayed under our threshold of < 1%. In addition, when running in HA mode, each publish action to the server (in the case of extract use) will require the File Store processes to synchronize the new extracts across all the nodes in the cluster. In previous versions, you could only run Data Engine processes on up to two nodes in a cluster. In Tableau Server 9.0, you can run the Data Engine process on any number of nodes.

Each machine running the Data Engine process also requires a File Store process. Given this configuration possibility, you should be aware that the more nodes you set up for extract redundancy, the more cost you will incur in synchronizing the extracts across the nodes. This cost is primarily reflected in network usage and should inform your decision about deploying server workers across slow links.

Applying Results

At this point, you are probably wondering how this applies to you and how you can determine the capacity you need for your deployments. In this paper, we demonstrated that Tableau Server scales nearly linearly for user concurrency. One approach you could take is to leverage the guidance in this paper to find the capacity you may need and use it as a baseline. Your actual results will vary because you will not be using the same system we used for the tests in this whitepaper.

Backgrounder Considerations

Much of what we discussed was in the context of user-facing workloads, the most critical of these being view loading, view interaction, and portal interactions. The Backgrounder server process does much of the work related to extract refreshes, subscriptions, and other scheduled background jobs. These jobs don't compete for capacity if you schedule them to run at off-peak hours. When this is not possible, you should plan for and add the capacity needed for your backgrounders and non-user-facing workloads to run concurrently with user-facing processes. Backgrounders are designed to consume an entire core's capacity because they are designed to finish the work as quickly as possible. When you run multiple backgrounders, you should consider the fact that a Backgrounder server process may impact other services running on the same machine. A good best practice is to ensure that for N cores available to Tableau Server on your machine, you run between N/4 and N/2 backgrounders on that machine. While not required, you could separate out the Backgrounder server processes onto dedicated hardware, as necessary, to isolate their impact on the end user workloads. If you are looking to conduct your own load testing to find out how Tableau Server scales in your environment with your workloads, here are some best practices.

Best Practices: DIY Scale Testing

Oftentimes, you may want to do your own scalability and load testing so you can determine how Tableau Server scales in your environment and with your workloads. When trying this on your own, here are the top four things you should factor in:

1. Don't treat Tableau Server as a black box. Tableau is designed to scale up and scale out. However, treating it as a black box may give you unexpected results, because scaling Tableau Server depends on many aspects of workload, configuration, system environment, and your overall system-under-test load.

2. Pick the right tool for testing. Tableau Server is a workhorse and does complex and resource-intensive work. There are many tools available to drive loads on Tableau Server. While Tableau doesn't directly support any of these tools, you should pick the one that allows for the greatest ease of use and represents your production environment the closest. Another consideration is ensuring you have the appropriate expertise in the tooling and in Tableau Server.

3. Select representative workbooks. Most often when we hear about performance or scale complaints, it is because the workbooks being used are not authored with best practices in mind. If a single-user test on your workbook shows a very slow response time, then you should optimize the workbook before you embark on a load-testing project.

4. When testing workbooks using live connections, remember that with the introduction of parallelization in Tableau Server 9.0, you may not need as many VizQL Servers as you may have deployed in your previous version of Tableau Server. Start with the new default configuration and scale up your processes incrementally.

TabJolt - Tooling for Scalability Testing

Tableau recently released TabJolt, a point-and-run load and performance testing tool that is designed to work easily with Tableau Server. It eliminates the need for script development and maintenance and allows for faster iteration. This tool is available as-is, for free, from GitHub. You can learn more about it from the release blog.


More information

PIVOTAL CRM ARCHITECTURE

PIVOTAL CRM ARCHITECTURE WHITEPAPER PIVOTAL CRM ARCHITECTURE Built for Enterprise Performance and Scalability WHITEPAPER PIVOTAL CRM ARCHITECTURE 2 ABOUT Performance and scalability are important considerations in any CRM selection

More information

Veeam ONE What s New in v9?

Veeam ONE What s New in v9? Veeam ONE What s New in v9? Veeam ONE is a powerful monitoring, reporting and capacity planning tool for the Veeam backup infrastructure, VMware vsphere and Microsoft Hyper-V. It helps enable Availability

More information

SQL Server Business Intelligence on HP ProLiant DL785 Server

SQL Server Business Intelligence on HP ProLiant DL785 Server SQL Server Business Intelligence on HP ProLiant DL785 Server By Ajay Goyal www.scalabilityexperts.com Mike Fitzner Hewlett Packard www.hp.com Recommendations presented in this document should be thoroughly

More information

Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays

Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays Database Solutions Engineering By Murali Krishnan.K Dell Product Group October 2009

More information

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

Performance Analysis and Capacity Planning Whitepaper

Performance Analysis and Capacity Planning Whitepaper Performance Analysis and Capacity Planning Whitepaper Contents P E R F O R M A N C E A N A L Y S I S & Executive Summary... 3 Overview... 3 Product Architecture... 4 Test Environment... 6 Performance Test

More information

DELL. Virtual Desktop Infrastructure Study END-TO-END COMPUTING. Dell Enterprise Solutions Engineering

DELL. Virtual Desktop Infrastructure Study END-TO-END COMPUTING. Dell Enterprise Solutions Engineering DELL Virtual Desktop Infrastructure Study END-TO-END COMPUTING Dell Enterprise Solutions Engineering 1 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

How To Test On The Dsms Application

How To Test On The Dsms Application Performance Test Summary Report Skills Development Management System December 2014 Performance Test report submitted to National Skill Development Corporation Version Date Name Summary of Changes 1.0 22/12/2014

More information

Oracle Hyperion Financial Management Virtualization Whitepaper

Oracle Hyperion Financial Management Virtualization Whitepaper Oracle Hyperion Financial Management Virtualization Whitepaper Oracle Hyperion Financial Management Virtualization Whitepaper TABLE OF CONTENTS Overview... 3 Benefits... 4 HFM Virtualization testing...

More information

Estimate Performance and Capacity Requirements for Workflow in SharePoint Server 2010

Estimate Performance and Capacity Requirements for Workflow in SharePoint Server 2010 Estimate Performance and Capacity Requirements for Workflow in SharePoint Server 2010 This document is provided as-is. Information and views expressed in this document, including URL and other Internet

More information

Qlik Sense Enabling the New Enterprise

Qlik Sense Enabling the New Enterprise Technical Brief Qlik Sense Enabling the New Enterprise Generations of Business Intelligence The evolution of the BI market can be described as a series of disruptions. Each change occurred when a technology

More information

1. Comments on reviews a. Need to avoid just summarizing web page asks you for:

1. Comments on reviews a. Need to avoid just summarizing web page asks you for: 1. Comments on reviews a. Need to avoid just summarizing web page asks you for: i. A one or two sentence summary of the paper ii. A description of the problem they were trying to solve iii. A summary of

More information

WHITE PAPER. SQL Server License Reduction with PernixData FVP Software

WHITE PAPER. SQL Server License Reduction with PernixData FVP Software WHITE PAPER SQL Server License Reduction with PernixData FVP Software 1 Beyond Database Acceleration Poor storage performance continues to be the largest pain point with enterprise Database Administrators

More information

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide An Oracle White Paper July 2011 1 Disclaimer The following is intended to outline our general product direction.

More information

Cloud Based Application Architectures using Smart Computing

Cloud Based Application Architectures using Smart Computing Cloud Based Application Architectures using Smart Computing How to Use this Guide Joyent Smart Technology represents a sophisticated evolution in cloud computing infrastructure. Most cloud computing products

More information

HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief

HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Technical white paper HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Scale-up your Microsoft SQL Server environment to new heights Table of contents Executive summary... 2 Introduction...

More information

Recommendations for Performance Benchmarking

Recommendations for Performance Benchmarking Recommendations for Performance Benchmarking Shikhar Puri Abstract Performance benchmarking of applications is increasingly becoming essential before deployment. This paper covers recommendations and best

More information

Improving Grid Processing Efficiency through Compute-Data Confluence

Improving Grid Processing Efficiency through Compute-Data Confluence Solution Brief GemFire* Symphony* Intel Xeon processor Improving Grid Processing Efficiency through Compute-Data Confluence A benchmark report featuring GemStone Systems, Intel Corporation and Platform

More information

ROCANA WHITEPAPER How to Investigate an Infrastructure Performance Problem

ROCANA WHITEPAPER How to Investigate an Infrastructure Performance Problem ROCANA WHITEPAPER How to Investigate an Infrastructure Performance Problem INTRODUCTION As IT infrastructure has grown more complex, IT administrators and operators have struggled to retain control. Gone

More information

Informatica Data Director Performance

Informatica Data Director Performance Informatica Data Director Performance 2011 Informatica Abstract A variety of performance and stress tests are run on the Informatica Data Director to ensure performance and scalability for a wide variety

More information

SharePoint 2010 Interview Questions-Architect

SharePoint 2010 Interview Questions-Architect Basic Intro SharePoint Architecture Questions 1) What are Web Applications in SharePoint? An IIS Web site created and used by SharePoint 2010. Saying an IIS virtual server is also an acceptable answer.

More information

Accelerating Hadoop MapReduce Using an In-Memory Data Grid

Accelerating Hadoop MapReduce Using an In-Memory Data Grid Accelerating Hadoop MapReduce Using an In-Memory Data Grid By David L. Brinker and William L. Bain, ScaleOut Software, Inc. 2013 ScaleOut Software, Inc. 12/27/2012 H adoop has been widely embraced for

More information

Esri ArcGIS Server 10 for VMware Infrastructure

Esri ArcGIS Server 10 for VMware Infrastructure Esri ArcGIS Server 10 for VMware Infrastructure October 2011 DEPLOYMENT AND TECHNICAL CONSIDERATIONS GUIDE Table of Contents Introduction... 3 Esri ArcGIS Server 10 Overview.... 3 VMware Infrastructure

More information

Test Run Analysis Interpretation (AI) Made Easy with OpenLoad

Test Run Analysis Interpretation (AI) Made Easy with OpenLoad Test Run Analysis Interpretation (AI) Made Easy with OpenLoad OpenDemand Systems, Inc. Abstract / Executive Summary As Web applications and services become more complex, it becomes increasingly difficult

More information

Microsoft Dynamics NAV 2013 R2 Sizing Guidelines for Multitenant Deployments

Microsoft Dynamics NAV 2013 R2 Sizing Guidelines for Multitenant Deployments Microsoft Dynamics NAV 2013 R2 Sizing Guidelines for Multitenant Deployments February 2014 Contents Microsoft Dynamics NAV 2013 R2 3 Test deployment configurations 3 Test results 5 Microsoft Dynamics NAV

More information

Topology Aware Analytics for Elastic Cloud Services

Topology Aware Analytics for Elastic Cloud Services Topology Aware Analytics for Elastic Cloud Services athafoud@cs.ucy.ac.cy Master Thesis Presentation May 28 th 2015, Department of Computer Science, University of Cyprus In Brief.. a Tool providing Performance

More information

NetIQ Access Manager 4.1

NetIQ Access Manager 4.1 White Paper NetIQ Access Manager 4.1 Performance and Sizing Guidelines Performance, Reliability, and Scalability Testing Revisions This table outlines all the changes that have been made to this document

More information

How To Test For Elulla

How To Test For Elulla EQUELLA Whitepaper Performance Testing Carl Hoffmann Senior Technical Consultant Contents 1 EQUELLA Performance Testing 3 1.1 Introduction 3 1.2 Overview of performance testing 3 2 Why do performance testing?

More information

Tableau for the Enterprise: An Overview for IT

Tableau for the Enterprise: An Overview for IT Tableau for the Enterprise: An Overview for IT Authors: Marc Rueter, Senior Director, Strategic Solutions Ellie Fields, Senior Director, Product Marketing May 2012 p2 Introduction A new generation of business

More information

Benchmarking Hadoop & HBase on Violin

Benchmarking Hadoop & HBase on Violin Technical White Paper Report Technical Report Benchmarking Hadoop & HBase on Violin Harnessing Big Data Analytics at the Speed of Memory Version 1.0 Abstract The purpose of benchmarking is to show advantages

More information

How to Configure a Stress Test Project for Microsoft Office SharePoint Server 2007 using Visual Studio Team Suite 2008.

How to Configure a Stress Test Project for Microsoft Office SharePoint Server 2007 using Visual Studio Team Suite 2008. How to Configure a Stress Test Project for Microsoft Office SharePoint Server 2007 using Visual Studio Team Suite 2008. 1 By Steve Smith, MVP SharePoint Server, MCT And Penny Coventry, MVP SharePoint Server,

More information

Upgrading to Microsoft SQL Server 2008 R2 from Microsoft SQL Server 2008, SQL Server 2005, and SQL Server 2000

Upgrading to Microsoft SQL Server 2008 R2 from Microsoft SQL Server 2008, SQL Server 2005, and SQL Server 2000 Upgrading to Microsoft SQL Server 2008 R2 from Microsoft SQL Server 2008, SQL Server 2005, and SQL Server 2000 Your Data, Any Place, Any Time Executive Summary: More than ever, organizations rely on data

More information

PARALLELS CLOUD STORAGE

PARALLELS CLOUD STORAGE PARALLELS CLOUD STORAGE Performance Benchmark Results 1 Table of Contents Executive Summary... Error! Bookmark not defined. Architecture Overview... 3 Key Features... 5 No Special Hardware Requirements...

More information

Preparing a SQL Server for EmpowerID installation

Preparing a SQL Server for EmpowerID installation Preparing a SQL Server for EmpowerID installation By: Jamis Eichenauer Last Updated: October 7, 2014 Contents Hardware preparation... 3 Software preparation... 3 SQL Server preparation... 4 Full-Text Search

More information

Managing Traditional Workloads Together with Cloud Computing Workloads

Managing Traditional Workloads Together with Cloud Computing Workloads Managing Traditional Workloads Together with Cloud Computing Workloads Table of Contents Introduction... 3 Cloud Management Challenges... 3 Re-thinking of Cloud Management Solution... 4 Teraproc Cloud

More information

Siebel & Portal Performance Testing and Tuning GCP - IT Performance Practice

Siebel & Portal Performance Testing and Tuning GCP - IT Performance Practice & Portal Performance Testing and Tuning GCP - IT Performance Practice By Zubair Syed (zubair.syed@tcs.com) April 2014 Copyright 2012 Tata Consultancy Services Limited Overview A large insurance company

More information

PEPPERDATA IN MULTI-TENANT ENVIRONMENTS

PEPPERDATA IN MULTI-TENANT ENVIRONMENTS ..................................... PEPPERDATA IN MULTI-TENANT ENVIRONMENTS technical whitepaper June 2015 SUMMARY OF WHAT S WRITTEN IN THIS DOCUMENT If you are short on time and don t want to read the

More information

MS EXCHANGE SERVER ACCELERATION IN VMWARE ENVIRONMENTS WITH SANRAD VXL

MS EXCHANGE SERVER ACCELERATION IN VMWARE ENVIRONMENTS WITH SANRAD VXL MS EXCHANGE SERVER ACCELERATION IN VMWARE ENVIRONMENTS WITH SANRAD VXL Dr. Allon Cohen Eli Ben Namer info@sanrad.com 1 EXECUTIVE SUMMARY SANRAD VXL provides enterprise class acceleration for virtualized

More information

Directions for VMware Ready Testing for Application Software

Directions for VMware Ready Testing for Application Software Directions for VMware Ready Testing for Application Software Introduction To be awarded the VMware ready logo for your product requires a modest amount of engineering work, assuming that the pre-requisites

More information

Evaluation Report: Accelerating SQL Server Database Performance with the Lenovo Storage S3200 SAN Array

Evaluation Report: Accelerating SQL Server Database Performance with the Lenovo Storage S3200 SAN Array Evaluation Report: Accelerating SQL Server Database Performance with the Lenovo Storage S3200 SAN Array Evaluation report prepared under contract with Lenovo Executive Summary Even with the price of flash

More information

MOC 20467B: Designing Business Intelligence Solutions with Microsoft SQL Server 2012

MOC 20467B: Designing Business Intelligence Solutions with Microsoft SQL Server 2012 MOC 20467B: Designing Business Intelligence Solutions with Microsoft SQL Server 2012 Course Overview This course provides students with the knowledge and skills to design business intelligence solutions

More information

VMware vrealize Automation

VMware vrealize Automation VMware vrealize Automation Reference Architecture Version 6.0 and Higher T E C H N I C A L W H I T E P A P E R Table of Contents Overview... 4 What s New... 4 Initial Deployment Recommendations... 4 General

More information

On- Prem MongoDB- as- a- Service Powered by the CumuLogic DBaaS Platform

On- Prem MongoDB- as- a- Service Powered by the CumuLogic DBaaS Platform On- Prem MongoDB- as- a- Service Powered by the CumuLogic DBaaS Platform Page 1 of 16 Table of Contents Table of Contents... 2 Introduction... 3 NoSQL Databases... 3 CumuLogic NoSQL Database Service...

More information

Server and Storage Sizing Guide for Windows 7 TECHNICAL NOTES

Server and Storage Sizing Guide for Windows 7 TECHNICAL NOTES Server and Storage Sizing Guide for Windows 7 TECHNICAL NOTES Table of Contents About this Document.... 3 Introduction... 4 Baseline Existing Desktop Environment... 4 Estimate VDI Hardware Needed.... 5

More information

Monitoring, Managing and Supporting Enterprise Clouds with Oracle Enterprise Manager 12c Jan van Tiggelen, Senior Sales Consultant Oracle

Monitoring, Managing and Supporting Enterprise Clouds with Oracle Enterprise Manager 12c Jan van Tiggelen, Senior Sales Consultant Oracle Monitoring, Managing and Supporting Enterprise Clouds with Oracle Enterprise Manager 12c Jan van Tiggelen, Senior Sales Consultant Oracle Complete Cloud Lifecycle Management Optimize Plan Meter & Charge

More information

OLAP Services. MicroStrategy Products. MicroStrategy OLAP Services Delivers Economic Savings, Analytical Insight, and up to 50x Faster Performance

OLAP Services. MicroStrategy Products. MicroStrategy OLAP Services Delivers Economic Savings, Analytical Insight, and up to 50x Faster Performance OLAP Services MicroStrategy Products MicroStrategy OLAP Services Delivers Economic Savings, Analytical Insight, and up to 50x Faster Performance MicroStrategy OLAP Services brings In-memory Business Intelligence

More information

Alfresco Enterprise on Azure: Reference Architecture. September 2014

Alfresco Enterprise on Azure: Reference Architecture. September 2014 Alfresco Enterprise on Azure: Reference Architecture Page 1 of 14 Abstract Microsoft Azure provides a set of services for deploying critical enterprise workloads on its highly reliable cloud platform.

More information

Delivering a first-class cloud experience.

Delivering a first-class cloud experience. CUSTOMER CASE STUDY Delivering a first-class cloud experience. Our aim is to make the cloud easy that s our promise to our customers. We take the complexity of building a cloud away from end-users, and

More information

SAP HANA PLATFORM Top Ten Questions for Choosing In-Memory Databases. Start Here

SAP HANA PLATFORM Top Ten Questions for Choosing In-Memory Databases. Start Here PLATFORM Top Ten Questions for Choosing In-Memory Databases Start Here PLATFORM Top Ten Questions for Choosing In-Memory Databases. Are my applications accelerated without manual intervention and tuning?.

More information

Running a Workflow on a PowerCenter Grid

Running a Workflow on a PowerCenter Grid Running a Workflow on a PowerCenter Grid 2010-2014 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise)

More information

Glassfish Architecture.

Glassfish Architecture. Glassfish Architecture. First part Introduction. Over time, GlassFish has evolved into a server platform that is much more than the reference implementation of the Java EE specifcations. It is now a highly

More information

SharePoint 2010 Performance and Capacity Planning Best Practices

SharePoint 2010 Performance and Capacity Planning Best Practices Information Technology Solutions SharePoint 2010 Performance and Capacity Planning Best Practices Eric Shupps SharePoint Server MVP About Information Me Technology Solutions SharePoint Server MVP President,

More information

An Oracle White Paper Released Sept 2008

An Oracle White Paper Released Sept 2008 Performance and Scalability Benchmark: Siebel CRM Release 8.0 Industry Applications on HP BL460c/BL680c Servers running Microsoft Windows Server 2008 Enterprise Edition and SQL Server 2008 (x64) An Oracle

More information

Tableau for the Enterprise: An Overview for IT

Tableau for the Enterprise: An Overview for IT Neelesh Kamkolkar, Product Manager Ellie Fields, Vice President Product Marketing Marc Rueter, Senior Director Strategic Solutions Tableau for the Enterprise: An Overview for IT Table of Contents 2 Introduction...3

More information

LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM

LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM Leverage Vblock Systems for Esri's ArcGIS System Table of Contents www.vce.com LEVERAGE VBLOCK SYSTEMS FOR Esri s ArcGIS SYSTEM August 2012 1 Contents Executive summary...3 The challenge...3 The solution...3

More information

Simplified Management With Hitachi Command Suite. By Hitachi Data Systems

Simplified Management With Hitachi Command Suite. By Hitachi Data Systems Simplified Management With Hitachi Command Suite By Hitachi Data Systems April 2015 Contents Executive Summary... 2 Introduction... 3 Hitachi Command Suite v8: Key Highlights... 4 Global Storage Virtualization

More information

EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION

EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION Automated file synchronization Flexible, cloud-based administration Secure, on-premises storage EMC Solutions January 2015 Copyright 2014 EMC Corporation. All

More information

Getting the Most Out of VMware Mirage with Hitachi Unified Storage and Hitachi NAS Platform WHITE PAPER

Getting the Most Out of VMware Mirage with Hitachi Unified Storage and Hitachi NAS Platform WHITE PAPER Getting the Most Out of VMware Mirage with Hitachi Unified Storage and Hitachi NAS Platform WHITE PAPER Getting the Most Out of VMware Mirage with Hitachi Unified Storage and Hitachi NAS Platform The benefits

More information

Product Brief SysTrack VMP

Product Brief SysTrack VMP for VMware View Product Brief SysTrack VMP Benefits Optimize VMware View desktop and server virtualization and terminal server projects Anticipate and handle problems in the planning stage instead of postimplementation

More information

Microsoft SharePoint Server 2010

Microsoft SharePoint Server 2010 Microsoft SharePoint Server 2010 Small Farm Performance Study Dell SharePoint Solutions Ravikanth Chaganti and Quocdat Nguyen November 2010 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY

More information

PARALLELS CLOUD SERVER

PARALLELS CLOUD SERVER PARALLELS CLOUD SERVER Performance and Scalability 1 Table of Contents Executive Summary... Error! Bookmark not defined. LAMP Stack Performance Evaluation... Error! Bookmark not defined. Background...

More information

Netwrix Auditor for Exchange

Netwrix Auditor for Exchange Netwrix Auditor for Exchange Quick-Start Guide Version: 8.0 4/22/2016 Legal Notice The information in this publication is furnished for information use only, and does not constitute a commitment from Netwrix

More information

In-Memory or Live Reporting: Which Is Better For SQL Server?

In-Memory or Live Reporting: Which Is Better For SQL Server? In-Memory or Live Reporting: Which Is Better For SQL Server? DATE: July 2011 Is in-memory or live data better when running reports from a SQL Server database? The short answer is both. Companies today

More information

The Methodology Behind the Dell SQL Server Advisor Tool

The Methodology Behind the Dell SQL Server Advisor Tool The Methodology Behind the Dell SQL Server Advisor Tool Database Solutions Engineering By Phani MV Dell Product Group October 2009 Executive Summary The Dell SQL Server Advisor is intended to perform capacity

More information

Best Practices for Web Application Load Testing

Best Practices for Web Application Load Testing Best Practices for Web Application Load Testing This paper presents load testing best practices based on 20 years of work with customers and partners. They will help you make a quick start on the road

More information

Accelerate Testing Cycles With Collaborative Performance Testing

Accelerate Testing Cycles With Collaborative Performance Testing Accelerate Testing Cycles With Collaborative Performance Testing Sachin Dhamdhere 2005 Empirix, Inc. Agenda Introduction Tools Don t Collaborate Typical vs. Collaborative Test Execution Some Collaborative

More information

TOP TEN CONSIDERATIONS

TOP TEN CONSIDERATIONS White Paper TOP TEN CONSIDERATIONS FOR CHOOSING A SERVER VIRTUALIZATION TECHNOLOGY Learn more at www.swsoft.com/virtuozzo Published: July 2006 Revised: July 2006 Table of Contents Introduction... 3 Technology

More information

SQL Anywhere 12 New Features Summary

SQL Anywhere 12 New Features Summary SQL Anywhere 12 WHITE PAPER www.sybase.com/sqlanywhere Contents: Introduction... 2 Out of Box Performance... 3 Automatic Tuning of Server Threads... 3 Column Statistics Management... 3 Improved Remote

More information

E21 Mobile Users Guide

E21 Mobile Users Guide E21 Mobile Users Guide E21 Mobile is the Mobile CRM companion to TGI s Enterprise 21 ERP software. Designed with the mobile sales force in mind, E21 Mobile provides real-time access to numerous functions

More information

MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM?

MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM? MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM? Ashutosh Shinde Performance Architect ashutosh_shinde@hotmail.com Validating if the workload generated by the load generating tools is applied

More information

Avoiding Performance Bottlenecks in Hyper-V

Avoiding Performance Bottlenecks in Hyper-V Avoiding Performance Bottlenecks in Hyper-V Identify and eliminate capacity related performance bottlenecks in Hyper-V while placing new VMs for optimal density and performance Whitepaper by Chris Chesley

More information