Improving Perforce Performance by over 10x
Tim Barrett & Shannon Mann, Research In Motion
December 2007
Abstract

Research In Motion (RIM) has completed a nine-month project that improved the performance of Perforce by over 10x. The Project Team worked with Perforce Technical Support, peer companies and internal resources to develop and execute a comprehensive project that addressed infrastructure and application issues. The team surpassed its goal of a 75% improvement in performance and has delivered an improvement of over 90%. This paper discusses the major and minor initiatives included in RIM's performance project, along with key ways of measuring performance.

Introduction

Research In Motion (RIM) is a world leader in the mobile communications market and has a history of developing breakthrough wireless solutions. RIM's portfolio of award-winning products, services and embedded technologies is used by thousands of organizations around the world and includes the BlackBerry(tm) wireless platform, the RIM Wireless Handheld(tm) product line, software development tools, radio-modems and software/hardware licensing agreements.

RIM has a large and rapidly growing Perforce server. In late 2006, the performance of Perforce was a growing concern across the Software organization, and a Project Team was assembled with the responsibility to fix all performance issues related to Perforce.

Starting Infrastructure

In January 2007, RIM was operating the following configuration:

Hardware: SunFire V890 with 8 dual-core SPARC IV CPUs (1.35GHz), RAM = 32GB (12% file cache)
OS: Solaris 9
Software: Perforce server; multiple GUI levels (mostly versions of P4WIN)
# of Users: 1,950

Baseline Performance

Before the project began, performance was measured by the number of processes executing at a given time. Monitoring tools counted the number of processes executing every 5 minutes (a minimal sketch of such a sampler follows Table 1). These tools highlighted performance issues only after they had affected the main server.

Table 1: Number of processes executing at a time in Perforce. Measured Mon-Fri, 10:00am-5:00pm.

                     Average   Median   Maximum   Minimum
Two-week Average        -         -        -         -
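The monitoring tooling itself is not reproduced in the paper. A minimal sketch of such a sampler, assuming a POSIX shell, that "p4 monitor" is enabled on the server, and an illustrative log path:

    #!/bin/sh
    # Every 5 minutes, log how many commands the server is executing.
    # "p4 monitor show" prints one line per running command.
    while :; do
        n=$(p4 monitor show | wc -l)
        echo "$(date '+%Y-%m-%d %H:%M') $n" >> /var/log/p4_proc_count.log
        sleep 300
    done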
Diagram 1: Number of processes executing at a time in Perforce, measured every 5 minutes on April 26, 2007. (Chart: number of processes vs. time of day, 0:00-23:53.)

The Project Team developed a method of measuring server response times that was more indicative of the developer experience. The team built a script to execute 5 processes every 15 minutes (a sketch of the script follows Diagram 2); the lapse time from the start of the first process to the end of the last process was recorded and tracked. This method still highlighted issues after they had affected the main server, but it also provided a way to measure the scale of the impact. The 5 processes executed were:

o p4 info > /dev/null
o p4 print -q //depot/swdocs/pub/hasty_guides/usage.pdf > /dev/null
o p4 fstat //depot/swdocs/pub/hasty_guides/usage.pdf > /dev/null
o p4 dirs //depot/vendor/* > /dev/null
o p4 sync -f //depot/admin/announcements/... > /dev/null

Table 2: Average weekly lapse times of the 5 test processes. Measured 9:00am-6:00pm EST, Mon-Fri.

                     Average   Median   Maximum   Minimum
Week 1                  -       1.4s     49.0s     0.9s
Week 2                  -       3.2s     89.1s     0.9s
Week 3                  -       5.0s    209.6s     0.7s
Week 4                  -       5.9s    123.2s     0.7s
Monthly Average       12.2s     4.1s    125.3s     0.8s

Diagram 2: Individual lapse times of the 5 test processes for the week of April 23-27, 2007. (Chart: lapse time, H:MM:SS, vs. time of day from 9:00 to 18:00.)
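The measurement script is likewise not included in the paper. A minimal sketch, assuming "date +%s" is available and an illustrative log path (run from cron every 15 minutes):

    #!/bin/sh
    # Run the 5 test commands back to back and log the total lapse time.
    start=$(date +%s)
    p4 info > /dev/null
    p4 print -q //depot/swdocs/pub/hasty_guides/usage.pdf > /dev/null
    p4 fstat //depot/swdocs/pub/hasty_guides/usage.pdf > /dev/null
    p4 dirs "//depot/vendor/*" > /dev/null
    p4 sync -f //depot/admin/announcements/... > /dev/null
    echo "$(date '+%Y-%m-%d %H:%M') $(( $(date +%s) - start ))s" >> /var/log/p4_lapse.log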
Performance Targets

While the mandate for the Project Team was clear ("Improve performance for the developers"), the team needed concrete performance targets. After considering the scale and distribution of the historical performance issues, the target was set as: deliver a 75% improvement in performance, measured using the average lapse times of the 5 test processes, Monday to Friday, 9:00am to 6:00pm EST. In other words, the average lapse time of the 5 test processes must be below 4 seconds, adjusted for growth, in order for the project to be considered a success.

Project Priorities

The Project Team engaged Perforce Technical Support early in the project life cycle to help identify potential sources of improvement. In addition, the Project Team engaged other large users of Perforce, looking for solutions implemented at their organizations. The expectation was that, while RIM was a unique organization, its Perforce performance problems were not unique and were experienced by other organizations. The Project Team decided to take early direction from others who had experienced and dealt with the performance issues, while investigating our own environment for unique issues. The list of potential solutions was:

1. Stop commands that are known to cause problems
2. Upgrade RAM from 32GB to 64GB
3. Upgrade the Perforce server version
4. Move from Sparc/Solaris to Opteron/Linux
5. Move from SAN storage to DAS storage
6. Optimize the Protection table
7. Upgrade client GUIs to the current level
8. Implement a RamSan for the metadata
The Project Team set priorities based on the expected improvement and the elapsed time required to implement each solution.

Performance Analysis

The causes of performance issues with Perforce are well known and have been discussed at previous user conferences. The RIM Project Team spent time throughout the project understanding the nature of the database locking problem in order to be able to duplicate it in a test environment.

The nature of the problem is the inter-table locking that occurs between read commands and write commands. The problem is not related to a single command, but to the interplay of multiple commands. When a command is reading information from a table, it holds a shared lock, which allows other reads to also access the table. If a command needs to write to a table, it holds an exclusive lock so that no other command can access the table at the same time.

The database locking problem occurs when a long read command is followed by a write command. The read command places a shared lock on the table. The write command must wait for the read to finish before it can take its exclusive lock on the table. Because Perforce queues lock requests in the order commands are received, all other commands must then wait for both the read command and the write command to finish.

The example below depicts the locking issue with one table. However, Perforce has over 40 tables, most commands utilize multiple tables, and some commands require all of their locks before proceeding. In the worst case, a long read command blocks a write command that is holding exclusive locks on multiple tables. This combination of locks is the essence of the cross-table lockjam problem, which can effectively turn Perforce into a single-threaded application.

Diagram 3: Table locking issue with Perforce. (Timeline chart; legend: process holding a read lock, process holding a write lock, process waiting to do a read, process waiting to do a write.)

The Project Team created a test scenario that duplicates the table locking issue. This test has enabled the Project Team to reproduce the major performance problem RIM experiences and to determine whether changes address the issue. The test developed by RIM is (a sketch appears below):

o Execute 10 submits of 10,000 files each, in parallel
o At the same time, execute a spinning integrate of 10,000 files
o And at the same time, execute a spinning fstat of all files
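The test scripts are not included in the paper. A rough sketch of how the scenario might be driven from the shell, assuming ten client workspaces (load1 through load10) that each already have 10,000 files open in a pending changelist, and illustrative depot paths:

    #!/bin/sh
    # Cross-table lockjam test (client names and paths are hypothetical).
    # 10 parallel submits of 10,000 files each:
    for i in 1 2 3 4 5 6 7 8 9 10; do
        p4 -c "load$i" submit -d "lockjam submit $i" &
    done
    # A spinning integrate of 10,000 files (fresh target each pass):
    ( n=0; while :; do
          p4 integrate //depot/test/src/... "//depot/test/tgt$n/..."
          n=$((n + 1))
      done ) &
    # A spinning fstat of all files:
    while :; do p4 fstat //depot/... > /dev/null; done &
    wait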
Major Initiative: Memory Upgrade

RIM was operating servers with 32GB of RAM. Based on advice from Perforce and from another peer company, it was determined that RIM required at least 64GB of RAM. The calculation used by Perforce was:

1. RAM should be 32MB per user, or
2. RAM should be 1.5KB per file

For RIM's 1,950 users, the first rule works out to 32MB x 1,950, roughly 61GB, which pointed to 64GB.

The Project Team executed a series of tests, first with 32GB of RAM, then with 64GB. In addition, different levels of file cache (setting segmapsize from 12% to 50%) were tested to determine whether higher file cache amounts improved performance (an illustrative /etc/system entry appears at the end of this section). The testing showed an average speed improvement of 11% with 64GB and segmapsize = 50%, and that the file cache changes did make a big difference in performance.

The memory was upgraded in Production on April 29, 2007. The performance improvement in production was immediately apparent during the early morning hours, when little activity occurred on the Perforce server. The average lapse times from 2:30am to 7:00am dropped from 1.2 seconds to 1.1 seconds, an improvement of 8%. Minimum lapse times of the 5 test processes also dropped from 0.73 seconds to 0.55 seconds, an improvement of 24%. Unfortunately, the user count increased significantly at the same time, and the improvement was not enough to compensate for the increase in volume.

Diagram 4: Minimum lapse times of the 5 test processes from 2:30am to 7:30am, before and after the memory upgrade. (Chart: minimum lapse times, H:MM:SS, by date, with the memory upgrade marked.)
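The segmapsize setting referenced above is a Solaris kernel tunable set in /etc/system and applied at reboot. An illustrative entry for the 50% case on the 64GB server (the value is in bytes; 0x800000000 is 32GB):

    # /etc/system (illustrative): raise the file cache to 50% of 64GB RAM
    set segmapsize=0x800000000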
Major Initiative: Server Upgrade

RIM had started experiencing performance issues shortly after an earlier upgrade of the server version. The suspicion was that the degradation in performance was related to that server upgrade, along with other changes RIM had incurred within its development process. Again, on the advice of Perforce, the Project Team evaluated the newer server version.

The major performance improvements included in the upgrade were changes to p4 integrate and p4 submit. In a QA environment, a series of commands was executed multiple times and the durations were analyzed to compare the speed of the two server versions (a sketch of such a timing harness follows Diagram 6). Of the nine types of commands tested, six were noticeably faster, but three were actually slower with the new version of the server software. The Project Team worked with Perforce and was unable to determine why these three commands executed consistently slower on the new version.

Table 3: Average durations in QA comparing lapse times of the old and new server versions.

Command    Old Version    New Version    Variance
DIRS           329s            64s         80.5%
FILES          165s           161s          2.4%
INTEGED       4347s          4121s          5.2%
LABELS          95s            85s         10.5%
SUBMIT        7335s          2246s         69.4%
SYNC           850s           666s         21.6%
CHANGES        176s           185s         -4.9%
FSTAT          252s           267s         -5.7%
OPENED          38s            44s        -14.1%

The upgrade in production was completed on May 20, 2007, and the result was immediately noticed by the developers. Locking issues that used to affect the main depot for over 5 minutes at a time were eliminated; the most significant issues now locked the main server for no more than 2 minutes. Below are two charts that track the daytime lapse times of the 5 test processes in the week before the upgrade and in the second week after the upgrade.

Diagram 5: Individual lapse times of the 5 test processes before the server upgrade (May 14-18, 2007). (Chart: lapse times vs. time of day, 9:00-18:00.)

Diagram 6: Individual lapse times of the 5 test processes after the server upgrade (May 28 - June 1, 2007). (Chart: lapse times vs. time of day, 9:00-18:00.)
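The QA comparison harness is not shown in the paper. A minimal sketch, with illustrative commands, that times one server at a time:

    #!/bin/sh
    # Time a set of representative commands against the given server;
    # run once per server version and compare the reported durations.
    P4PORT=${1:?usage: bench.sh host:port}
    export P4PORT
    time p4 dirs "//depot/*"   > /dev/null
    time p4 files //depot/...  > /dev/null
    time p4 changes -m 10000   > /dev/null
    time p4 fstat //depot/...  > /dev/null
    time p4 opened -a          > /dev/null

Running it once against each server, for example "bench.sh qa-old:1666" and then "bench.sh qa-new:1666", yields directly comparable timings.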
Major Initiative: Opteron Update

RIM was utilizing SunFire V890 servers with Solaris 9 and approached the recommendation by Perforce to switch to Opteron/Linux with suspicion. At the 2007 User Conference in Las Vegas, the Project Team confirmed with many other large users of Perforce that the Opteron/Linux combination was superior to the Sparc/Solaris combination in terms of Perforce performance. The Project Team proceeded to compare the following hardware/OS configurations:

Current Hardware: SunFire V890, 8 dual-core SPARC IV CPUs (1.35GHz), 64GB RAM, Solaris 9
New Hardware: SunFire X4600, 8 AMD Opteron 8220 processors (2.86GHz, dual-core), 128GB RAM, Linux (RHEL4, update 5)

The test results in the QA environment confirmed the information the Project Team had received from Perforce and other large users of Perforce. In the QA environment, Opteron/Linux outperformed Sparc/Solaris, with performance improvements ranging from 62% to 98%. (A sketch of the populate test follows Table 4.)

Table 4: Lapse times from Sparc/Solaris vs. Opteron/Linux tests.

Test Type                                          Sparc/Solaris  Opteron/Linux  Improvement
Lock Test: create 20 files and cycle through           4m6s            39s           84%
  locking them 2,000,000 times
Populate Test: create 100,000 files (6.4K each)      2h14m51s        38m8s           72%
  and submit them all in one changelist
Integrate Test: integrate the files submitted in      29m54s          31s            98%
  the populate test into a different directory
Submit Integrate Test: submit the changelist           1m3s           17s            73%
  created in the integrate test
Rebuild from Checkpoint                                4h40m         1h45m           62%
Checkpoint Test                                        8h31m         1h20m           84%
Average Improvement                                                                  79%
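The test scripts are not reproduced; as an illustration, the populate test could be scripted roughly as follows (the workspace path is hypothetical; "p4 -x -" reads its file arguments from stdin):

    #!/bin/sh
    # Populate test: create 100,000 files of ~6.4KB each and submit them
    # in a single changelist.
    cd /ws/populate || exit 1
    i=0
    while [ "$i" -lt 100000 ]; do
        dd if=/dev/zero of="f$i.dat" bs=6400 count=1 2> /dev/null
        i=$((i + 1))
    done
    find . -name 'f*.dat' -print | p4 -x - add
    p4 submit -d "populate test: 100,000 files"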
The new Opteron servers were implemented into production on Sept 23, 2007, and the result was immediate. For the first time since the project began, Perforce performance was no longer an issue, and positive comments were received from many different users. The RIM Perforce environment was able to handle the high volume of traffic effectively without incurring any lengthy lock problems. Below are the average lapse times of the 5 sample processes for the 5 weeks prior to and the 5 weeks after the upgrade to Opteron/Linux.

Diagram 7: Average weekly daytime lapse times of the 5 test processes, Mon-Fri 09:00-18:00. (Chart: weekly average lapse time, MM:SS, for the weeks from late July through late October 2007.)

In addition, the increased speed has allowed a higher number of processes to be executed on a daily basis. The average number of processes completed during a typical business day increased by 8% after the switch to the Opteron CPUs.

Major Initiative: Read-only Replica

The volume of activity on the main Perforce server at RIM is a growing concern. RIM is adding a significant number of users every few months, and the trend is expected to continue into the near future. In addition, an explosion of products and new processes has increased the number of builds performed on a regular basis. The Project Team investigated Perforce usage and determined that the various build activities executed daily accounted for over 60% of the volume on Perforce. In fact, the top 15 accounts (all build activity) accounted for over 98% of the daily p4 sync commands. The p4 sync command is a top suspect for creating the perfect environment for the database locking issue described in the Performance Analysis section (Diagram 3).

Based on advice from a number of large users of Perforce, RIM investigated implementing a read-only replica of the main server to offload all build activity. RIM tested a solution from Perforce (p4 jrep), along with advice from a peer company, and set up a replica server. The server utilizes the p4 jrep script to replay the journal file to the read-only replica. RIM modified the script so that key tables needed to control the use of the mirror (such as the db.have table) are not copied to it (a sketch of the core replay loop appears at the end of this section). RIM is now in the process of moving all build activity from the main server to the read-only replica server.
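RIM's modified script is not reproduced here, but the core journal-replay idea can be sketched as follows. The paths are illustrative, and reading the journal stream on stdin via "p4d -jr -" is an assumption:

    #!/bin/sh
    # Continuously replay the master's journal into the replica's database,
    # dropping db.have records so the replica keeps its own have list.
    tail -f /p4/master/journal \
      | grep -v '@db.have@' \
      | p4d -r /p4/replica/root -jr -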
Major Initiative: RamSan Solid State Disk

At the 2007 User Conference, one company reported success with a RamSan unit in addressing performance concerns for large Perforce installations. Another peer company installed similar technology in the summer of 2007 and also reported significant gains in performance. This information prompted the Project Team to evaluate a RamSan solid state disk (128GB) unit. The strategy was to test the unit against local disk and against our production array. A series of low-level tests was compiled, along with the cross-table lockjam test described in the Performance Analysis section of this document. The end result is that, while the RamSan solid state disk showed measurable improvement in the low-level testing, the real-world application testing did not show any measurable improvement. It is the opinion of the Project Team that our environment is currently optimized and that disk access speed is not at present a bottleneck. The Project Team plans to implement a RamSan unit when volumes increase to the point where performance begins to suffer.

Table 5: RamSan low-level test results.

Test                 Local Disk (SCSI RAID 0)   Sun SAN Array 9985   RamSan 128GB
DD (disk dump)                  -                       -                 -
Copy of 24GB file               -                       -                 -
IOTMS avg IOPS               620 iops               1261 iops         7288 iops
Locktest                      9.092s                  9.350s            8.178s

Table 6: RamSan real-world testing.

Test           Sun SAN       RamSan     RamSan 128GB   Local Disk   Local Disk
               Array 9985    128GB      with cache                  with cache
Serial Add       46.21s      43.85s        96.07s        52.25s       49.96s
Parallel Add        -           -             -             -            -
Avg FSTAT         2.8s        2.95s         2.56s          3.1s         3.05s
Total               -           -             -             -            -

Minor Initiatives

The Project Team also completed many minor initiatives to address the performance concerns.

Kill Orphaned Processes

The RIM Perforce server saw its number of orphaned processes grow every day. These processes were complete from the GUI's perspective, but not from the server's perspective, and they muddied the performance monitoring results. The Project Team created a script to kill any process that had been executing for over 2 hours (a sketch follows). It was not proven whether this action improved the performance of the main server.
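The reaper script is not included in the paper. A minimal sketch built on standard ps output; the p4d process name and the 2-hour threshold come from the text, while the pidfile path is assumed:

    #!/bin/sh
    # Kill p4d worker processes that have run for over 2 hours, sparing
    # the listening daemon itself. ps etime is [[dd-]hh:]mm:ss, so four
    # time fields means days are present and three means hours are.
    master=$(cat /p4/master.pid)
    ps -eo pid=,etime=,comm= | awk -v m="$master" '$3 == "p4d" && $1 != m {
        n = split($2, t, /[-:]/)
        if (n == 4 || (n == 3 && t[1] + 0 >= 2)) print $1
    }' | while read pid; do
        kill "$pid"
    done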
Rebuild from Checkpoints

The Project Team completed a series of rebuilds from checkpoint to rebuild the indexes and improve performance. While RIM was operating the older server version, the occasional rebuild from checkpoint provided temporary improvements lasting 2 to 5 business days. However, after RIM moved to the newer server version, no improvement was measured on subsequent rebuilds from checkpoint.

Optimize Protection Table

RIM had a protection table that included a number of exclusion entries and wildcards. After the 2007 Perforce Conference in Las Vegas, the Project Team used the information from Michael Shields' presentation to optimize the protection table. The improvements were made a few at a time over a period of two months, which made measuring the performance improvement from the changes impossible.

Log Parser

To thoroughly understand and manage the use of Perforce, the Project Team developed a log parser along with a reporting database. The log parser captures all relevant information from the log files for reporting and analysis. The database is now used to investigate performance situations, report statistics and monitor user behaviours.

Upgrade GUI Versions

The Project Team utilized the new log parser to determine which users were executing old GUI versions. It was determined that over 60% of users were on versions below the recommended level. Perforce has stated that some of these old versions had known performance problems and were causing some of RIM's issues. The Project Team began identifying users with old GUI versions and worked with them to upgrade to the recommended GUI version. An install package was created that allows users to easily upgrade Perforce. This package prevents users from installing P4 EXP and also sets key options within Perforce to help optimize performance.

Limit the Scope of Clientspecs

The Project Team determined through various analysis techniques that there were instances of users syncing the entire depot, which impacted performance for all users. At these times, the commands placed very long read locks on key tables in the metadata and effectively blocked the rest of the users for up to 20 minutes. The Project Team developed a trigger that now prevents users from mapping the entire depot at once (a sketch of such a trigger appears below); users are forced to select at least one layer down from the top of the tree, reducing accidental syncs of the entire depot. Since this trigger has been in place, there have been no instances of the depot being locked for 20 minutes at a time, and the trigger prevented 5 users from syncing the entire depot in just the first few weeks.

Local Proxy Investigation

One team was utilizing a local proxy to improve performance for their group. The Project Team investigated the local proxy and was able to show that it not only did not help performance, it actually slowed performance for the group using it. The only real benefit from the proxy was isolating the activity of these users from some of the previous volume reporting tools. The group has now stopped using a local proxy. Proxies are still used by all remote locations using Perforce.

Reducing FSTAT

Volume is one of the factors that affect performance. The Project Team analyzed the types of activity, looking for excessive use of the depot. Through this analysis, it was determined that some users were polling the depot for changes every second, producing an excessive number of FSTAT commands that provide the right conditions for the database locking issue. The Project Team worked with users to change polling to more appropriate intervals to reduce the load from the FSTAT command. In addition, changes were implemented in the LANDesk install package to turn off the automatic polling option.
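The clientspec trigger is not shown in the paper. A rough sketch of the idea behind the Limit the Scope of Clientspecs initiative, assuming a form-save trigger on the client form (the trigger name, script path and depot name are illustrative):

    #!/bin/sh
    # Reject client views that map the entire depot at the top level.
    # Assumed triggers table entry:
    #   checkwide form-save client "/p4/triggers/checkwide.sh %formfile%"
    if grep -q '//depot/\.\.\.[[:space:]]' "$1"; then
        echo "Client views may not map //depot/... directly."
        echo "Please map at least one level down, e.g. //depot/project/..."
        exit 1
    fi
    exit 0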
Divide the Depot

The Project Team investigated and started the process of splitting the depot along functional lines. The process involved taking two exact copies of the depot and obliterating the alternate part of each. The test obliteration process lasted over 15 consecutive days and eventually corrupted the test depot. This option was abandoned for numerous reasons:

1. It was not clear that this alternative would solve all of the performance issues
2. Key functionality and re-use of code would be lost
3. The process of dividing the depot was not stable or efficient
4. Interfacing tools were not able to support multiple depots

Archive Old Version Files

The Project Team investigated archiving files older than a key date in the past. The assumption was that if the size of the depot were smaller, performance would improve. At the 2007 User Conference, this solution was discussed with Perforce and other companies, and Perforce expressed interest in addressing the archiving of older files as part of their solution. This option was therefore placed on hold to allow time for Perforce to propose a solution.

Conclusion

RIM has successfully addressed its Perforce performance concerns, with the average lapse time of the 5 test processes now averaging below 3 seconds (a 90% improvement). At the same time, the Perforce user base has continued to expand and volume has increased. With the ever-growing volume, the work required to keep Perforce operating at top speed will never be complete. The Project Team is already looking at new initiatives, in various stages of development, that will provide further improvements.

The most important measure of the success of the performance project is the comments received directly from the developers. Below are a few quotes received after the latest changes, which clearly show the success of the performance improvements:

"Wow, the depot is fast now. I think you're going to have a lot of happy developers - this one included."
"I've never seen Perforce perform this well! Great job guys!"
"I was just using Perforce and... it's fast. Really fast. I love it."