WHITE PAPER
FUJITSU PRIMERGY SERVERS
BASICS OF DISK I/O PERFORMANCE

This technical documentation is aimed at those responsible for the disk I/O performance of Fujitsu PRIMERGY servers. It is intended to help readers understand disk I/O measuring methods and performance data in order to draw suitable conclusions for the customer-specific sizing and configuration of internal disk subsystems in PRIMERGY servers.

Contents

  Document history
  Performance indicators for disk subsystems
  Performance-relevant influencers
    Block sizes
    Parallel accesses to disk subsystems
    Operating system and applications
    Controllers
    Storage media
  Disk I/O performance measurements
    The Iometer measuring tool
    Benchmark environment
    Load profiles
    Measuring procedure
    Measurement results
  Analysis of disk subsystems
    Planning
    Analysis in case of performance problems
  Literature
  Contact

(c) Fujitsu Technology Solutions 2011
Document history

Version 1.0
Performance indicators for disk subsystems

As non-volatile storage media, hard disk drives and solid state drives are both safety-relevant and performance-critical components in the server environment. Since an individual storage medium has a very high access time compared with other server components, such as the processor or main memory, particular importance is attached to the sizing and configuration of disk subsystems. On account of the plethora of different application scenarios, there is a very large number of configuration options, in particular for disk subsystems. It is therefore not possible to assess all aspects of a disk subsystem with a single performance indicator. The following indicators are of particular interest:

- Data throughput: the quantity of data that can be processed within a specific time interval
- Transaction rate: the number of requests (transactions) that can be processed within a specific time interval
- Average response time: the average time for the processing of a single request

Performance-relevant influencers

The various performance-relevant influencers can be grouped into five topic areas:

- Block sizes
- Parallel accesses to disk subsystems
- Operating system and applications
- Controllers
- Storage media

Block sizes

During access to a disk subsystem, data transfer always takes place in blocks. The size of the transferred data blocks depends on features of the operating system and/or the application and cannot be influenced by the user.

[Figure: "Influence of block size with random access" and "Influence of block size with sequential access". Two diagrams plotting throughput [MB/s], transactions [IO/s] and latency [ms] against block size [KB].]

As the left diagram above illustrates for random access, throughput and response time (latency) increase almost linearly with increasing block size, while the number of transactions decreases almost linearly.
Generally, the theoretical maximum throughput of the disk subsystem is not achieved here. With sequential access, only the response time increases almost linearly with increasing block size; throughput does not. Here the theoretical maximum throughput of the disk subsystem is already achieved with a block size of 64 KB. Which throughput rates can be achieved therefore depends very much on the access pattern that an application generates on the disk subsystem.
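This contrast between the two access types can be made concrete with a toy model: with random access the disk subsystem is limited by its transaction rate, while with sequential access it is limited by its maximum data rate, which here is reached at a 64 KB block size. The limit values in this sketch are invented for illustration, not measured figures:

```python
def random_throughput_mb_s(block_kb, max_iops=200):
    """Random access: the transaction rate is the bottleneck, so
    throughput grows linearly with the block size."""
    return max_iops * block_kb / 1024

def sequential_throughput_mb_s(block_kb, max_mb_s=150, max_iops=2400):
    """Sequential access: throughput is capped by the media data rate
    once the block size is large enough (here from 64 KB on)."""
    return min(max_mb_s, max_iops * block_kb / 1024)

for kb in (4, 16, 64, 256):
    print(kb, random_throughput_mb_s(kb), sequential_throughput_mb_s(kb))
```

With these assumed limits, the sequential curve saturates at exactly 64 KB while the random curve keeps growing linearly, mirroring the behavior described above.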
The following table shows examples of typical access patterns of various applications:

Application                          Access pattern
Operating system                     random, 40% read, 60% write, 4 KB blocks
File copy (SMB)                      random, 50% read, 50% write, 64 KB blocks
File server (SMB)                    random, 67% read, 33% write, 64 KB blocks
Mail server                          random, 67% read, 33% write, 8 KB blocks
Database (transaction processing)    random, 67% read, 33% write, 8 KB blocks
Web server                           random, 100% read, 64 KB blocks
Database (log file)                  sequential, 100% write, 64 KB blocks
Backup                               sequential, 100% read, 64 KB blocks
Restore                              sequential, 100% write, 64 KB blocks
Video streaming                      sequential, 100% read, 64 KB blocks

Parallel accesses to disk subsystems

Numerous clients normally access a server simultaneously. A client can also place several requests with a server without waiting for the responses in between. Both can result in simultaneous accesses to a single controller or even to a single data medium. Most controllers and data media therefore have queuing mechanisms, which when processing several parallel requests can under certain circumstances deliver higher performance than when processing fewer parallel or only single requests. This, however, always comes at the price of higher response times. If additional parallel requests only increase the response time and no longer the throughput, the disk subsystem is overloaded.

[Figure: "Influence of outstanding I/Os with random access" and "Influence of outstanding I/Os with sequential access". Two diagrams plotting transactions [IO/s] or throughput [MB/s] and latency [ms] against the number of outstanding I/Os.]

The two diagrams above show by way of example the performance with random and sequential access, in each case with a constant block size and load levels from 1 to 64 outstanding I/Os.
In the diagrams the number of transactions is congruent with the throughput, because with a constant block size their ratio is always identical, regardless of the number of outstanding I/Os. In the random scenario (left diagram) throughput initially increases with an increasing number of outstanding I/Os, quickly reaches its maximum and then virtually maintains this value. In the sequential scenario (right diagram) a throughput close to the maximum possible is achieved regardless of the number of outstanding I/Os. With an increasing number of parallel requests, the response times (latency) increase continuously and linearly in both the random and the sequential scenario. Optimization towards short response times can therefore safely take place in the sequential scenario by extending the disk subsystem, while in the random scenario it is also necessary to keep an eye on throughput performance.
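The linear growth of latency with queue depth follows from Little's Law: the number of outstanding I/Os equals the transaction rate multiplied by the average response time. Once the transaction rate has saturated, every doubling of the queue depth simply doubles the latency. A minimal sketch (the saturated transaction rate below is an invented illustrative figure, not a measured value):

```python
def latency_from_queue_depth_ms(outstanding_ios, transactions_per_s):
    """Little's Law: requests in flight = rate x time in system, so
    latency [ms] = 1000 * outstanding I/Os / transaction rate [IO/s]."""
    return 1000.0 * outstanding_ios / transactions_per_s

# Random scenario: the transaction rate saturates quickly, so beyond that
# point a higher queue depth only buys longer response times.
saturated_iops = 400.0  # illustrative value for a small RAID array
for oio in (1, 4, 16, 64):
    print(oio, latency_from_queue_depth_ms(oio, saturated_iops))
```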
If faced with an application scenario in which the response behavior of the server cannot be disregarded, a choice must be made between optimizing the throughput and optimizing the response behavior. The server must then be sized and configured in such a way that it can process parallel requests in accordance with the individual requirements.

Operating system and applications

The manner in which applications access the mass storage system considerably influences the performance of a disk subsystem. The operating system used, the possible use of a virtualization layer, the I/O scheduling technology, the file system, the file caches and the organization of the data media, for example through partitioning or software RAID, all play a role here.

Controllers

As far as throughput performance is concerned, the selection of the controller plays an important role if the data media are not organized via software RAID. In addition to controllers integrated on the system board, various plug-in controllers can be used to connect internal or external data media. It should be noted that controllers are designed for the connection of a limited number of data media. If this number is exceeded, the controller becomes a performance-limiting factor.

RAID and JBOD

Hard disks are among the most error-prone components of a computer system. This is why RAID controllers are used in server systems to prevent data loss through hard disk failure. Here several hard disks are combined to form a "Redundant Array of Independent Disks", or RAID for short. The data is spread over several hard disks in such a way that all the data is retained even if one hard disk fails. JBOD (just a bunch of disks) and RAID 0 are the exceptions: these organizational forms only combine data media and create no redundancy. The most common ways of organizing hard disks in arrays are JBOD, RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50 and RAID 60.
The selected organizational form and the number of combined data media also influence the performance of the disk subsystem to a great extent.

LUN

LUN stands for "Logical Unit Number" and was originally used to address SCSI hard disks. From the viewpoint of an operating system the term commonly refers to a virtual hard disk. Such a virtual hard disk can be identical to a physical hard disk, or it can be a hard disk array (JBOD or RAID).

Stripe size

In a RAID array the data is distributed over the storage media in pieces, as so-called chunks. A stripe set consists of one chunk from each data medium of an array. The stripe size specifies the size of a stripe set without any parity chunks. This size, which must be specified when the RAID array is created, influences both the throughput and the response time.

Cache

A number of controllers have caches, with which they can influence throughput in three, for the most part separately adjustable, ways:

- Through the caching of write data: the write request is immediately acknowledged to the user as completed, although the data does not yet exist on the data medium. The actual write process takes place en bloc later. This procedure enables optimal utilization of controller resources, a faster succession of write requests and therefore higher throughput. Power failures can be bridged by an optional battery backup unit (BBU) so as to ensure data integrity.
- Through the caching of read data in application scenarios with purely sequential read accesses, and for some controllers also with partially sequential read accesses.
- Through the setting-up of request queues: by sorting the request sequence, the controller can optimize the movements of the read/write heads of a hard disk. The prerequisite, however, is that the controller is supplied with enough requests for a queue to form.
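The request sorting mentioned in the last point can be illustrated with a deliberately simplified elevator-style sketch: requests ahead of the current head position are served in ascending order before the head sweeps back. Real controller and disk firmware is far more sophisticated; the request positions below are invented for illustration:

```python
def sort_queue(head_pos, requests):
    """Simplified one-sweep request sorting: serve all requests at or
    ahead of the current head position in ascending order, then the
    remaining ones, minimizing back-and-forth head movement."""
    ahead = sorted(r for r in requests if r >= head_pos)
    behind = sorted(r for r in requests if r < head_pos)
    return ahead + behind

# Pending requests by logical block address, head currently at LBA 500:
print(sort_queue(500, [880, 120, 512, 950, 40]))  # [512, 880, 950, 40, 120]
```

The benefit only materializes when enough requests are queued at once, which is exactly the prerequisite stated above.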
Storage media

First of all, the type of storage medium itself is of great importance for performance. As rotating magnetic storage media, hard disks have completely different characteristics than solid state drives, which are based on semiconductor memory and offer substantially higher performance. Solid state drives are several times faster than hard disks, but they have a shorter lifespan and are considerably more expensive. In contrast to hard disks, SSDs show a performance difference between writing to an empty memory cell and overwriting old memory content, because the latter must first be erased before it can be overwritten. As a result, the write speed of an SSD can decrease noticeably as the filling level increases. In general, however, an SSD remains superior to a hard disk with regard to performance.

The transfer protocol and the cache also play an important role. Maximum transfer speed of the hard disk interface:

- SATA 3.0 Gbit/s: 286 MB/s effective throughput in each direction
- SAS: 286 MB/s effective throughput in each direction
- SAS II: 572 MB/s effective throughput in each direction

Cache

The influence of the hard disk cache on performance has two causes:

- The setting-up of request queues: by sorting the request sequence, the hard disk logic can optimize the movements of the read/write head. The prerequisites for this are that the hard disk cache is enabled and that the hard disk is supplied with enough requests for a queue to form.
- The caching of data: during a read request it is normally not only the requested sector that is read, but also other sectors on the same track; these are buffered in the cache just in case. It is also worthwhile to temporarily store write requests in the hard disk cache, because their processing is generally not as time-critical as that of read requests, which the requesting application is waiting for.
This also applies analogously to solid state drives.

Both the rotational speed and, if the size of the used data area is specified, the capacity also influence the performance of hard disks:

- Rotational speed: the higher the rotational speed, the shorter the time until the requested sector passes under the read/write head. There are SATA hard disks with rotational speeds of 5400 rpm and 7200 rpm. SAS hard disks have higher rotational speeds of 10,000 rpm or 15,000 rpm.
- Capacity: hard disks have both a constant number of revolutions per minute and a constant data density. This means that the data quantity per track increases from the inside outwards, and thus the access rates at the outermost edge of the hard disk are the highest. Since a data quantity of a given size is positioned further towards the outside on hard disks with a high capacity, the capacity has a considerable influence on performance.
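The influence of the rotational speed can be quantified: on average the requested sector is half a revolution away from the read/write head, so the mean rotational latency in milliseconds is 0.5 * 60,000 / rpm. A small sketch over common rotational speeds:

```python
def avg_rotational_latency_ms(rpm):
    """Mean rotational delay: half a revolution, expressed in ms.
    One revolution takes 60,000 / rpm milliseconds."""
    return 0.5 * 60_000 / rpm

for rpm in (5400, 7200, 10_000, 15_000):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2))
```

Doubling the rotational speed roughly halves this latency component, which is one reason fast-spinning SAS disks sustain higher transaction rates under random load.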
Disk I/O performance measurements

At Fujitsu, disk I/O performance measurements are performed for all PRIMERGY servers in the PRIMERGY Performance Lab. In contrast to the application benchmarks, it is generally not the performance of an entire server including its disk subsystem that is tested, but merely the performance of the disk subsystem itself, in other words the storage media as such together with their controllers. The disk subsystem is sized in such a way that server components, such as the processors or the main memory, do not form a bottleneck during the measurements. While it is quite possible to measure the maximum throughput performance of an entire server configuration with an adequately large disk subsystem, this is not the goal of the disk I/O performance measurements described here. The measurement results are documented in the performance reports of the PRIMERGY servers.

The Iometer measuring tool

The Iometer measuring tool, which was originally developed by Intel, is used for the disk I/O performance measurements performed in the PRIMERGY Performance Lab. Since the end of 2001 Iometer has been maintained as an open source project and is ported to various platforms and enhanced by a group of international developers. Iometer consists of a user interface for Windows systems and the so-called dynamo, which is available for various platforms. For some years now both components have been available for download under the Intel Open Source License. Iometer makes it possible to reproduce the behavior of real applications as far as accesses to disk subsystems are concerned and offers a number of parameters for this purpose. First of all, a data area is defined that is accessed during the measurements.
The following Iometer parameters are used to create the data area:

- Maximum Disk Size
- Starting Disk Sector

And the following Iometer parameters are used to define a detailed access scenario:

- # of Worker Threads
- # of Outstanding I/Os
- Test Connection Rate
- Transfer Request Size (block size)
- Percent of Access Specification
- Percent Read/Write Distribution
- Percent Random/Sequential Distribution
- Transfer Delay
- Burst Length
- Align I/Os
- Reply Size

In this way the most varied application scenarios can be reproduced, be it with sequential read or write, random read or write, or combinations thereof, with different block sizes as well as a variably adjustable number of simultaneous accesses. As a result, Iometer provides a text file with comma-separated values (.csv) for the respective access pattern. The key indicators are:

- Throughput per second
- Transactions per second
- Average response time

This method permits the performance of various disk subsystems to be compared for certain access patterns. Iometer can access not only disk subsystems with a file system, but also disk subsystems without a file system, so-called raw devices. In both cases the file cache of the operating system is bypassed and operation takes place block by block on a single test file. As standard, the PRIMERGY Performance Lab uses the Windows version of the Iometer dynamo with a defined data area, a set of load profiles that occur in practice and a defined measuring scenario for the disk I/O performance measurements. These definitions form the basis for the reproducibility of measurement results and allow an objective comparison of the performance of various disk subsystems.
Benchmark environment

The disk I/O performance measurements on PRIMERGY servers are made with their internal disk subsystem, and for blade servers also with a storage blade. Before measurements on RAID arrays, a foreground initialization of the RAID array takes place first. As standard, the measurements are carried out under Windows Server 2008 Enterprise Edition. The mass storage system is formatted with the NTFS file system without quick formatting and with deactivated compression, even if higher performance could be achieved with other file systems or raw devices. The parameter "Index this drive for faster searching" is deactivated for the measurement drive. The number of measurement files corresponds to the number of virtual hard disks; thus, operation is generally performed on a single measurement file. The size of a measurement file depends on the number of data media in the RAID array, but not on their capacity:

[Table: size of the measurement file (in GB) as a function of the number of data media in the RAID array.]

Load profiles

As standard, a series of load profiles characterized by the following mass storage accesses is used for the disk I/O performance measurements:

Access       read   write   Transfer Request Size (block size) [KB]   # of Outstanding I/Os
sequential   100%   0%      1, 4, 8, 64, 128, 512, 1024               1, 3, 8, 16, 32, 64, 128, 256, 512
sequential   0%     100%    1, 4, 8, 64, 128, 512, 1024               1, 3, 8, 16, 32, 64, 128, 256, 512
random       100%   0%      1, 4, 8, 64, 256, 1024                    1, 3, 8, 16, 32, 64, 128, 256, 512
random       0%     100%    1, 4, 8, 64, 256, 1024                    1, 3, 8, 16, 32, 64, 128, 256, 512
random       67%    33%     1, 4, 8, 16, 32, 64, 128                  1, 3, 8, 16, 32, 64, 128, 256, 512
random       50%    50%     64                                        1, 3, 8, 16, 32, 64, 128, 256, 512

The following standard settings also apply in measurements with one controller for all load profiles:

- # of Worker Threads = 1
- Test Connection Rate = off
- Transfer Delay = 0
- Burst Length = 1
- Align I/Os = Sector Boundaries
- Reply Size = No Reply
Some of these load profiles can be assigned to typical applications:

Standard load profile   Access       read   write   Block size [KB]   Application
File copy               random       50%    50%     64                Copying of files
File server             random       67%    33%     64                File server
Database                random       67%    33%     8                 Database (data transfer), Mail server
Streaming               sequential   100%   0%      64                Database (log file), Data backup; Video streaming (partial)
Restore                 sequential   0%     100%    64                Restoring of files

Measuring procedure

A measuring run of 40 seconds' duration is started for every defined access pattern. During the first 10 seconds (ramp-up phase) no measurement data is collected; this only happens in the following so-called steady-state phase of 30 seconds' duration. The measuring procedure is shown schematically in the following diagram:

[Diagram: measuring phases alternate as A B A B ... A B, where A = ramp-up (10 s) and B = steady state (30 s).]
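The separation into ramp-up and steady-state phases can be sketched as follows: samples collected during the first 10 seconds of a run are discarded, and only the remaining 30 seconds contribute to the reported averages. The per-second samples below are invented for illustration:

```python
def steady_state_mean(samples, ramp_up_s=10, run_s=40):
    """Average per-second samples over the steady-state phase only.
    samples: list of (timestamp_s, value) pairs covering the run."""
    steady = [v for t, v in samples if ramp_up_s <= t < run_s]
    return sum(steady) / len(steady)

# Invented samples: throughput climbs during ramp-up, then stabilizes.
samples = ([(t, 50.0) for t in range(0, 10)] +
           [(t, 100.0) for t in range(10, 40)])
print(steady_state_mean(samples))  # 100.0
```

Discarding the ramp-up phase prevents cache warm-up and queue build-up effects from distorting the reported steady-state figures.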
Measurement results

For each load profile Iometer provides various key indicators, of which the following are of particular interest:

- Throughput [MB/s]: throughput in megabytes per second
- Transactions [IO/s]: transaction rate in I/O operations per second
- Latency [ms]: average response time in milliseconds

Throughput and transaction rate are directly proportional to each other and can be converted into each other according to the formulas

    Data throughput [MB/s] = Transaction rate [IO/s] * Block size [MB]

    Transaction rate [IO/s] = Data throughput [MB/s] / Block size [MB]

The throughput rate has established itself as the normal measurement variable for sequential load profiles, whereas the measurement variable "transaction rate" is mostly used for random load profiles with their small block sizes. Independently of the respective load profile, the average response time is also of interest. Its level depends on the transaction rate and on the parallelism when executing the transactions. It can be calculated according to the formula

    Average latency [ms] = 1000 * Number of worker threads * Parallel I/Os / Transaction rate [IO/s]
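Expressed in code, the three conversions above look like this (block sizes are given in KB and converted to MB, as in the load profile tables):

```python
def data_throughput_mb_s(transaction_rate_io_s, block_kb):
    """Data throughput [MB/s] = transaction rate [IO/s] * block size [MB]."""
    return transaction_rate_io_s * block_kb / 1024

def transaction_rate_io_s(throughput_mb_s, block_kb):
    """Transaction rate [IO/s] = data throughput [MB/s] / block size [MB]."""
    return throughput_mb_s / (block_kb / 1024)

def average_response_time_ms(worker_threads, parallel_ios, rate_io_s):
    """Average latency [ms] = 1000 * workers * parallel I/Os / rate [IO/s]."""
    return 1000 * worker_threads * parallel_ios / rate_io_s

# 8 KB random profile at 12800 IO/s corresponds to 100 MB/s;
# with 1 worker thread and 16 outstanding I/Os the latency is 1.25 ms.
print(data_throughput_mb_s(12800, 8))        # 100.0
print(average_response_time_ms(1, 16, 12800))  # 1.25
```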
Analysis of disk subsystems

Planning

The variety of factors that, to some extent seriously, influence the throughput performance of a disk subsystem calls for detailed information about the field of application for which a disk subsystem is to be sized and configured. The following questions are of key significance here:

- Which access pattern exists?
- How high are the necessary transaction rates of the application?
- What capacity (in GB) is needed?
- How large is the time slot for the backup? Can the data quantity be fully backed up within this time slot?
- For how long at most may the data be unavailable during a restore? (It should be noted that this may include the restore from a backup medium as well as the replay of the transaction logs.)

Practical experience has shown that the following rules of thumb should be observed when designing a disk subsystem so as to avoid subsequent performance problems:

- Data with clearly different access patterns should be placed on different RAID arrays. For example, the sequential accesses for transaction logs and the random accesses of a database on one and the same drive would result in performance problems. Storing several databases on one array is less critical, as long as the necessary transaction rates can be provided.
- Inadequate transaction rates are frequently the reason for bottlenecks in large database systems. To increase the number of possible I/Os per second, you should use several data media with low capacity rather than a few media with high capacity.
- The RAID controllers should be optimally configured. Help is provided here by the utility program "ServerView RAID" supplied with current servers, with its predefined "Performance" mode, which can be used instead of the "Data Protection" mode that is set as standard.
Whereas these two configuration modes attune all the relevant parameters to each other in an optimal way, the various settings can also be made on an individual basis.

- If the cache of a RAID controller is used, a battery backup unit (BBU) should be employed to protect against loss of data in case of a power failure.
- If possible, the write caches of the data media should be activated. However, the use of an uninterruptible power supply (UPS) to protect against loss of data in case of a power failure is then a prerequisite.
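The rule of thumb "several small data media rather than a few large ones" can be turned into a rough sizing estimate. The per-disk IOPS figure and the per-RAID-level write penalties in this sketch are common planning assumptions, not values from this document, and any real sizing should be validated against measured profiles:

```python
import math

# Typical back-end I/Os caused by one logical write (planning assumption):
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def disks_needed(read_iops, write_iops, raid_level, iops_per_disk):
    """Estimate how many data media are needed so that the back-end load
    (reads plus penalized writes) fits the array's transaction budget."""
    backend_iops = read_iops + write_iops * WRITE_PENALTY[raid_level]
    return math.ceil(backend_iops / iops_per_disk)

# Database profile: 67% read / 33% write of 3000 IO/s on RAID 5,
# assuming roughly 180 IO/s per fast-spinning disk.
print(disks_needed(2010, 990, "RAID5", 180))  # 34
```

The same load on RAID 10 would need markedly fewer disks, because each logical write costs only two back-end I/Os instead of four.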
Analysis in case of performance problems

Detailed information is also necessary if the performance of a disk subsystem in use is analyzed in order to identify any opportunities for optimization. In a comparison of different configurations, server components that are not part of the disk subsystem can also be of interest; for example, differences in processor, memory, etc. can also be a cause of inadequate load generation.

Server hardware
- Server
- CPU, number of CPUs
- Memory capacity
- PCI controller

Server software
- Hypervisor (if used)
- Operating system
- Partitions, volumes
- Software RAID
- File system
- Operating-system-specific parameter settings
- Application

Storage hardware
- Per controller: controller type, BBU, cache size, cache settings
- Per RAID array: RAID level, number of drives, stripe size
- Per drive: drive type, cache setting
Tools

In addition to Iometer, there are a number of aids for analyzing the performance of storage systems. Here is a brief summary of commonly used tools:

Linux:
- sar: for collecting, evaluating and saving information on system activity
- strace: for logging system calls and signals

Windows:
- Performance Monitor: for recording and evaluating the performance counters integrated in Windows systems
- Process Monitor (from Sysinternals): for displaying and analyzing file system activities

External disk subsystems: for some external disk subsystems there are tools for analyzing the I/O behavior. A detailed description of these tools is omitted here, as it would go beyond the scope of this paper. Before using these tools, you should familiarize yourself with them with the help of the online help or the manuals.

Tips

If there is any reason to suspect that the disk subsystem is the cause of inadequate performance, you should understand the I/O behavior of the application involved in order to be able to correctly interpret the performance counters available for the analysis (e.g. from the Windows Performance Monitor). When talking about an application in the server environment, this usually does not mean a program that is visible to the end user, but e.g. a file, web, SQL or Exchange server. You should also be aware that optimization strategies can be implemented in every software layer between the application and the disk subsystem (e.g. a file system with its caching mechanisms, a volume manager or an I/O driver), and that these strategies do not necessarily let the whole system provide optimal performance in every situation. Factors that influence throughput do not have a constant impact in a real environment, but vary both over time and across the LUNs used.
The following examples show how the performance counters of these tools can be used for the analysis of performance problems:

Ratio of read to write requests

To obtain the ratio of read to write requests, the I/Os taking place on the logical drives concerned are determined by means of the tools provided by the operating system (e.g. Performance Monitor under Windows or strace under Linux) or by the storage system. Together with knowledge about the application, this information shows whether the storage system behaves as expected. If, for example, write-intensive accesses to a volume are recorded on a file server that is chiefly used for data retrieval, it is clear that further analyses are necessary there.

Block size of the transactions performed

The number of read/write requests of certain block sizes can be used to reveal possible performance problems. If, for example, the application works with a block size of 16 KB, then a large number of requests of this size is to be expected. If this is not the case, a volume manager or an I/O driver is possibly merging or splitting the requests. It should be noted in such an analysis that the average values supplied by the Windows Performance Monitor (e.g. "Avg. Disk Bytes/Read") do not reflect the exact distribution of the block sizes. With the Process Monitor, on the other hand, it is possible to log the requests transmitted by the application to the file system, but without being able to observe the requests that ultimately arise directly at the interface to the disk subsystem. An analysis tool on an external disk subsystem can offer further analysis options.

Locality of the accesses

If the accesses to a data stock are not spread over the entire data stock, but take place to some extent within a specific area, this is known as locality of the accesses.
This information can be derived from any existing statistics about the cache: high hit rates indicate that at least part of the accesses takes place within a specific data area. If you have the opportunity, with Process Monitor or strace, to find out which areas of the processed files are accessed, then together with the numbers of read or written bytes you can get an idea as to whether the accesses within e.g. an 80 GB file are distributed over the entire 80 GB on a fully random basis, or whether only a few gigabytes are processed during certain phases. Only in the latter case does the activation of any existing read caches make sense, as the cache of the controller would otherwise be searched for data in vain.

Number of simultaneous requests on a logical drive

The requests queuing on a logical drive ("Avg. Disk Queue Length") and, in certain circumstances, the utilization of the volume ("% Disk Time") are a measure of the intensity of the I/Os. By extending a logical drive to include additional data media it is possible to influence the parallelism on the individual data media and thus optimize transaction rates and response times. However, this usually requires a full backup and restore of the databases concerned.

Response times

The response times ("Avg. Disk sec/Transfer") of a logical drive specify how the storage system reacts to the applied load. It should be noted that the response time depends not only on the number and block size of the requests, but also on whether they are read or write requests and on the mix ratio between them.

Distribution of the I/Os over the logical drives and over time

If a disk subsystem works under a high load, care should be taken to ensure that the I/O intensity is distributed as evenly as possible over all the LUNs. In practice this is not easy to achieve, because both the I/O intensity and its distribution over the LUNs vary over time. For example, the loads at the end of the month, quarter or year are completely different from the normal I/O loads that arise as a result of day-to-day work. Regular backup cycles or large-scale database queries can also result in bottleneck situations. Even the logging-on of users or their break times can influence the I/O intensity and its distribution over the volumes. If you analyze the I/O loads of the logical drives and at the same time determine, at critical times, the drives with the highest load and the drives with low loads, this information can help you when deciding e.g.
to move data from one logical drive to another so as to achieve a more evenly balanced volume utilization. It may also be possible to eliminate some bottlenecks by rescheduling regular activities, e.g. the backup. However, after all changes of this kind it is necessary to continue to observe the storage system in order to ensure that the elimination of a bottleneck in one place does not result in a new bottleneck in another.

Optimal RAID level and number of disks

In transaction-intensive applications, e.g. databases, it can make sense in the event of bottlenecks to change to a different RAID level or to a larger number of data media. Such changes mostly require a full backup and restore of the stored databases. Current RAID controllers provide the option of expanding RAID arrays online ("online capacity expansion"). However, it must be taken into consideration that this expansion is very time-consuming, because all the already saved data must be reorganized, i.e. redistributed over the expanded RAID array.

The analyses listed above can of course also lead to the result that the storage system itself is not the reason for inadequate performance, but that the reasons are to be found in the application itself or, even more so, in the way the application uses the storage system.
Literature

- PRIMERGY systems
- PRIMERGY performance
- Information about Iometer

Contact

FUJITSU Technology Solutions
- PRIMERGY Product Marketing
- PRIMERGY Performance and Benchmarks

All rights reserved, including industrial property rights. Delivery subject to availability; right of technical modifications reserved. No liability or warranty assumed for completeness, validity and accuracy of the specified data and illustrations. Designations may be trademarks and/or copyrights of the respective manufacturer, the use of which by third parties for their own purposes may infringe the rights of such owners.

Copyright (c) Fujitsu Technology Solutions GmbH 2011