Performance Tuning & Sizing Guide for SAS Users and Sun System Administrators Updated Edition


SAS User: "I don't want to be a UNIX guru to get good performance."
UNIX Sys Admin: "I don't want to be a SAS guru to provide good performance."

by William Kearns, Senior Systems Engineer, Sun Microsystems Inc., william.kearns@sun.com
and Tom Keefer, Sun Technical Alliance Manager, SAS Institute, tom.keefer@sas.com

Written for SUGI 28, Seattle, WA, March 28, 2003

Index

1. Foreword
2. Where to Start
3. Baseline Performance
4. System Architecture
5. Tools (Gathering the Data)
6. System I/O
7. Memory
8. CPU
9. Application Coding
10. Tuning in Multi-user Environments
11. Multi-tier Applications
12. Summary

Appendices:
A. Sizing Sheet
B. FULLSTIMER Details
C. System Monitoring and Data Gathering Tools
D. Sun Fire 6800 I/O Scalability Tests
E. Sun Fire 280R System Throughput Diagram
F. References and Resource Link
G. The People Behind The Effort

1. Foreword

The focus of this whitepaper is to provide a basic understanding of how to analyze and apply tuning changes to SAS software running on the Sun Solaris operating system and the Sun UltraSPARC hardware platform. It is an updated edition of a paper written in 1999 by Sun and SAS staff members, and it provides updates based on Sun Solaris versions 8 and 9 along with Base SAS versions 8.2, 9.0 and 9.1 (to be released later this year). This paper will give you a feel for how to plan a new installation of SAS running on Sun. For those of you with existing installations, it provides tools and methods to help identify issues and address them.

2. Where to Start

In order to predict or increase performance you will need to gather a good bit of information about your SAS application(s) and how they will be used. Here is a list of the things you will need to know before you get started:

- What SAS products and versions are you going to use?
- What hardware will it be running on?
- What operating system(s) are you going to be running?
- How many users will be accessing the system?
- How many SAS jobs, and of what types, will be running at the same time?
- How often and how long will the users be accessing the system?
- Where and in what format will your data be stored (data sets, database)?
- Is your data stored on a different system than your SAS application?
- What is the network architecture between the users and server(s)?
- What type of queries, and how many, will each of your users make?
- Are other applications going to be running on the box besides SAS?

This is a lot of information to gather. Unfortunately, if you don't have a good bit of it, it will be difficult to properly size and/or tune your system. Included in Appendix A of this paper is a sample sizing document you can use to collect answers to these and other questions that will be useful for sizing or tuning your system.

3. Baseline Performance

For tuning or sizing purposes it is critical to understand the performance characteristics of the most common tasks you will be asking of the system. These include those larger, more intensive tasks that seem to take over all of the system resources ("Hmmmm... every time Maureen logs on to the system, everything seems to crawl"). Sometimes just finding these jobs can be difficult. By using some of the basic system monitoring tools we will be covering later, you will be able to seek out these tasks.

For new projects, capturing baseline performance is not usually an option. You will need to rely on your experience and your favorite SAS and Sun technical contacts to provide references and/or sizing estimates for the type of work you will be doing. This is why it is critical to gather as much of the sizing information as possible before settling on a system configuration. It will help prevent you from undersizing or incorrectly configuring the system for your application(s).

To capture the baseline performance it is important to isolate these jobs as much as possible, and optimally to run them on a controlled test system. It is important to understand standalone performance before you can even begin to tackle a multi-user system. Key concepts for capturing standalone SAS application/job performance:

- Ensure baseline tests accurately represent your production code/problem.
- Match the system configuration to production as much as possible (hardware layout, O/S, etc.).
- Run jobs on a quiet system (no other users or jobs).
- Capture SAS execution statistics (FULLSTIMER option; Appendix B).
- Capture system statistics during runs (I/O, CPU, RAM, network, database, etc.; Appendix C).
- Understand baseline system performance to get an understanding of potential system throughput. Know your hardware limitations and bottlenecks.

This will be useful later as you move into multi-user testing and for helping understand production performance.
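As a minimal sketch of such a baseline run (the job name, log locations and the 5-second sampling interval are placeholders, not recommendations), the whole capture can be driven from one small script:

#!/bin/sh
# Hypothetical baseline capture: run one SAS job on a quiet system
# while recording system statistics in the background.
LOG=/tmp/baseline

vmstat 5 > $LOG.vmstat &            # memory and CPU, sampled every 5 seconds
VMPID=$!
iostat -xn 5 > $LOG.iostat &        # per-device I/O, sampled every 5 seconds
IOPID=$!

# FULLSTIMER makes SAS write per-step resource usage to the log.
sas -fullstimer -log $LOG.saslog baseline_job.sas

kill $VMPID $IOPID                  # stop the background monitors

Keeping the vmstat and iostat samples alongside the FULLSTIMER log makes the later comparison against multi-user and production runs much easier.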

4. System Architecture

One common error when doing system sizing or tuning is to ignore the system architecture. Your application will only perform as well as the system it is running on, and a bottleneck at the system level can have drastic performance effects: you can only go as fast as your weakest link. Figure A is a simplistic system diagram that shows the relationship between memory, CPU, storage and the network. A more complex diagram of an actual Sun Fire 280R is included in Appendix E.

[Figure A: Hardware Architecture Diagram - memory, storage, CPU and network connected by the system backplane]

The speed of the system backplane is the most critical component. It alone determines how fast you can move data back and forth through the system. A superhighway is a good way to describe the backplane of a computer: it connects all the smaller streets and buildings to everything else in the city, and it is also the roadway where you can drive the fastest as you head toward your destination. Backplanes in server systems like the Sun Fire 6800 can maintain sustained data rates of 9.6 Gigabytes per second, where small desktop PCs can only sustain data rates well under 1 Gigabyte per second. This is the difference between a system designed for multi-user enterprise class applications and one designed for a single user running desktop applications like a web browser, spreadsheet or word processor.

Be sure to design your system with all of these components in mind. If you can't move data out of storage or the network fast enough to feed the backplane and ultimately the CPU, you won't be able to get work done. It doesn't matter how fast your CPUs are; if you can't feed them with data, they will sit idle. A very common issue we run into when analyzing performance problems is a very large system with many CPUs and lots of memory but an underperforming I/O subsystem.

Example System:
- Qty ... CPUs at ... MHz
- 9.6 GB/s backplane
- 96 GB of RAM
- Qty 9 x 73 GB disk drives via Fiber Channel (657 GB of total storage)

In this configuration, the user has a large amount of memory capacity, over half a terabyte of storage and a significant amount of CPU horsepower. The good news is that data can be moved across the backplane between memory and CPU at incredible speeds (note the backplane speed of 9.6 GB/s). However, the maximum sustained speed of a 100 MB/s Fiber Channel controller is about 70 MB/s. If you have an I/O intensive application (which SAS typically is), your system will spend a lot of time waiting for the disks to feed information up to the server at that maximum sustained speed of about 70 MB/s. A single Fiber Channel connection is considerably slower than the system's 9.6 GB/s backplane. The overall performance of this system could easily be enhanced by adding more Fiber Channel connections and disks. Be sure to balance disks across the channels for optimal performance.

It has been great news for customers that disk drives are now available in capacities well over 100 GB per spindle. This is great on the pocketbook and helps reduce datacenter space requirements, but it's terrible for system performance, since most people now buy fewer spindles. Disk size has increased, but individual disk performance has not kept up. It's a shame you can't buy a home PC with 6 disk drives; even an old 300 MHz Pentium PC would run faster if it didn't have to always wait for a single disk drive to respond to every I/O request. More spindles and controllers mean faster application response time.

A good rule of thumb for I/O intensive servers is to add at least one I/O controller (preferably Fiber Channel or high speed SCSI) for every 1-2 CPUs you add to the system. You should also have a minimum of 6-12 disk drives per I/O channel to keep the pipe full. There will be more on I/O later in this document.

5. Tools (Gathering the Data)

There are many different tools you can use to collect system statistics and monitor SAS application performance, but the most important one is a built-in feature of SAS: the FULLSTIMER option. This option allows you to capture run time at a PROC-by-PROC and overall job execution level (see the FULLSTIMER section in Appendix B for details on how to turn it on and diagnose results). By turning on FULLSTIMER you can get a better feel for where your jobs and applications spend most of their time, and it can give you clues on where to focus your tuning efforts. It is critical to save the baseline results from FULLSTIMER for later comparison to your production runs, especially those in a multi-user environment.

Besides monitoring SAS jobs, it is critical to gather system data and monitor system behavior during your baseline and production runs. Here is a list of things you should collect (details of which commands to use and sample outputs can be found in Appendix C):

Configuration Data:
- System memory
- Number of CPUs and their speed
- File system layouts, disk sizes and storage subsystem configuration
- Operating system versions and patch levels
- SAS versions and patch levels

System Monitoring Tools:
- sar (UNIX system activity reporter)
- iostat (I/O subsystem monitoring tool)
- prstat and ps (process monitoring tools)
- swap (monitor memory allocation)
- vmstat (virtual memory statistics)
- truss (system call tracing)
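One quick way to capture the configuration data above is with a few stock Solaris commands; a sketch:

$ /usr/sbin/prtconf | grep Memory      # total physical memory
$ /usr/sbin/psrinfo -v                 # number of CPUs and clock speeds
$ uname -a                             # operating system release
$ showrev -p                           # installed patch levels
$ df -k                                # file system layout and sizes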

6. System I/O

In the authors' opinion, I/O can be the most important component of SAS System performance. The good news is there are only a few simple concepts to keep in mind. Conceptually, there are 2 distinct I/O areas of concern to SAS applications: SAS data areas and WORK (scratch) areas. Data areas are specified programmatically by the SAS System libname directive; thus, they could be any writable directory on UNIX platforms, including network attached file systems like NFS. Any data in the WORK area is removed when the SAS application terminates properly; jobs that terminate abnormally can leave large temporary directories in your SAS work space.

It should be obvious that I/O configurations are highly dependent on site specific, user specific and application specific factors. Thus, there is no one set of I/O configuration guidelines that fits all situations. However, here are a few general guidelines:

General Guidelines
- Spread the load out over as many spindles, controllers and paths back to the host as possible.
- Separate your data and WORK areas. The default WORK area is /usr/tmp or /tmp and should be modified to point to a separate logical volume which has been configured for heavy, write intensive activity (see the sketch after this list). If all system users are using the same WORK area, it can easily become a system bottleneck. While it is conceptually very easy to set up separate WORK areas corresponding to different I/O channels, educating users to take advantage of them or change their configurations may prove more difficult than one might think.
- If the plan is to use one large WORK area for all users, maximize the I/O paths. This may involve striping across different storage cabinets if necessary. Since the WORK area is temporary and typically doesn't need to be backed up, configure it accordingly.
- When sizing the WORK volumes, there are no set rules; size is application dependent. Additionally, users can code their applications to make more (or less) use of the SAS WORK area. Many procedures (e.g. SORT) need to make complete copies of their input data sets before deleting the original. This temporary copy is usually made in the "working" area and not necessarily in the SAS WORK directory.
- Since NFS mount points are transparent, users may not realize that access of large data files is occurring at network speeds, causing serious application performance degradation as well as severe network congestion. Avoid NFS access of large data sets if performance is a primary concern.
- Don't overlook the network. Numerous SAS/CONNECT sessions requiring large data transfers can cause network and I/O bottlenecks system wide. Look at the jobs and the amount of data being transferred. Can the data be subset before being brought over the wire? Do more network interfaces need to be added to the system to allow for more network bandwidth? If using 100 Mb Ethernet, a switched hub is essential; or maybe you could move to 1000 Mb (Gigabit) Ethernet for increased throughput.
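To point the default WORK area at a dedicated, write-optimized volume, the SAS -work option can be given at invocation or placed in the configuration file. A minimal sketch (the /saswork volume and per-user directory are assumptions for illustration):

$ sas -work /saswork/$USER myprog.sas

Or, for all users, in sasv8.cfg / sasv9.cfg:

-WORK /saswork

The per-user directory must already exist and be writable. Pointing every user at one directory works too, as long as the underlying volume has enough I/O paths to carry the combined write load.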

RAID Issues - RAID 5, Striping/Mirroring, and Stripe Interlace

Areas that are write intensive (typically WORK) should avoid software RAID 5 (parity) and be configured with RAID 0+1 (striped mirrors) or RAID 0 (striped). Due to the nature of RAID 5, a logical write requires on the order of 4-5 I/Os, and the parity calculations require a non-trivial amount of CPU time. As a test, using a simple SAS data step, we copied a data set (~1 GB in size) to a striped partition (RAID 0) and then to a RAID 5 partition (using software RAID):

            Real      User      System
RAID 0      2:...     ...       ...:02.6
RAID 5      11:...    ...       ...:47.9
RAID 0 vs. RAID 5 data set copy

Most modern storage platforms like the Sun StorEdge family now have hardware RAID 5 accelerators and cache built into the device and can provide RAID 5 performance comparable to that of RAID 0. RAID 0 (striping) offers no redundancy and thus should obviously not be used in 24x7 mission critical environments.

Logical Volume Stripe Unit

Most workgroup and enterprise server configurations use a logical volume manager such as the VERITAS Volume Manager or Sun's Solstice DiskSuite to configure and manage the storage platforms. In a striped configuration, the volume unit size (also known as the interlace, chunk size or segment size) times the number of columns or disks equals the stripe width. What stripe unit should be chosen? In the absence of knowing anything about the application, we simply suggest 64K. With large blocked sequential I/O, it is best to choose a moderate size stripe unit. This allows more I/Os to be spread out across all the spindles of the stripe column and the read-ahead buffers to be handled more efficiently. From observations using truss(1m), you might notice that a SAS application typically issues relatively small read/write requests, usually on the order of 8K or 16K. Thus, 64K or 128K should be a good target stripe unit. Classical SAS applications often depend on sequential I/O, i.e. performing multiple iterations, record by record, over an entire data set. However, making extensive use of indexed data sets will usually not be sequential at all in nature. So, even in cases where random I/O may dominate (i.e. using indexes with high cardinality in the data which return small result sets), a stripe unit of 64K is still generally a reasonable choice.

SWAP Configuration

What about using Sun's tmpfs as a WORK area? There are times when this can be a performance win, but in general it should be avoided when working with large data sets. Tmpfs is a memory based file system which is backed by the SWAP partition; if you write large data files to this partition, you could easily induce paging.

Configure plenty of SWAP if using PROCs which have large virtual memory requirements. Many SAS procedures make extensive use of the mmap(2) system call. For every mmap'd memory segment a corresponding amount of SWAP must be reserved even if it is not used (the pages are not allocated until needed). A number of PROCs will return "insufficient memory" errors if the SWAP reservation cannot be made.

Thus you could be the only user on a quiet system configured with 8 GB of RAM and still get an insufficient memory error if you do not have a large enough SWAP area.

Some PROCs may produce unexpected results. For instance, in experimenting with different SORTSIZE values, we discovered that certain runs were using a variable amount of memory despite the fact that we knew exactly how much should have been used. We realized that our SWAP area was much too small to back the mmap requests, depending on what else was going on system wide. After allocating more SWAP area, the programs utilized the expected amount of memory.

You can check the amount of SWAP configured with the swap(1m) command (swap -l) and the amount reserved with swap -s. Note that the free space on the SWAP device does not equal the free SWAP available. This is because of the swapfs file system, /tmp, which is a combination of physical SWAP space and free memory. In the example below, the SWAP device reports 1.0+ GB free while swap -s reports 1.7 GB available; swap -l gives the actual amount available on disk.

$ swap -l
swapfile             dev     swaplo   blocks   free
/dev/dsk/c1t10d0s1   32,...  ...      ...      ...

$ swap -s
total: 33800k bytes allocated + 5488k reserved = 39288k used, ...k available

If users specify a large MEMSIZE, you must have a cumulative amount of SWAP area set aside. In the unfortunate case where you do have paging to your SWAP area, configure your SWAP devices like your data areas and optimize the I/O: rather than 3 or 4 single SWAP devices, consider a RAID 0 (striping) configuration. If you have a 24x7 environment, then RAID 0 will obviously not satisfy availability requirements. You can dynamically create and add SWAP areas with the mkfile(1m) and swap -a commands (see the sketch below). Obviously, you want to avoid doing this on a disk partition that is under heavy load. As discussed in the section on memory, you can monitor the SWAP device as a simple way to check for paging.
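A sketch of adding SWAP on the fly (the size and path are illustrative only; pick a lightly loaded disk):

# /usr/sbin/mkfile 2048m /export/swapfile    # create a 2 GB swap file
# /usr/sbin/swap -a /export/swapfile         # activate it
# /usr/sbin/swap -l                          # confirm the new device is listed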

SAS BUFSIZE/BUFNO Options

If you truss(1m) your SAS processes, you may find that most of the read/write(2) system calls use a fairly small buffer size, probably 8K or 16K. You may find that the performance of certain jobs increases if you increase the BUFSIZE option for a specific data set. Note that this BUFSIZE setting is specific to a SAS data set and its value is stored in the metadata header. Thus, the only way to change the buffer size is to use a data step to copy the data set and specify the new BUFSIZE:

data new (bufsize=64k);
   set libname.old;
run;

You can query the BUFSIZE of a data set by issuing a PROC CONTENTS:

proc contents data=libname.dataset;
run;

In our experimentation, we found a small but not appreciable performance benefit to increasing BUFSIZE. However, one of our large enterprise customers in the medical insurance industry found a significant improvement in performance when they aligned the SAS BUFSIZE with the data volume stripe unit on their storage device. We need to mention that this testing was done in a very controlled environment. If you have a well characterized job or set of jobs and the ability to regulate the runs, you might find that changing BUFSIZE produces good performance gains.

The Solaris Buffer Cache (also called the File System Cache)

What is the Solaris buffer cache? The Sun Solaris operating system utilizes free system memory to cache writes to the UFS file system. This allows it to take full advantage of unused memory to help organize file system writes and prevent multiple reads of commonly used data. The Solaris file buffer cache lessens the need to modify BUFSIZE. An area where changing BUFSIZE could potentially help would be if truss(1m) showed that a file was opened synchronously (O_SYNC) and sync(2) was being issued to flush the contents of the output buffers. Note that truss(1m) on Solaris 7 or higher can show library calls, so this could show up as an fsync(3c). Be careful in experimenting with this parameter, as a number of PROCs make algorithmic decisions based on the value of BUFSIZE, which could adversely affect performance. Although intuitively you may want the SAS System to take advantage of the larger I/O buffer capabilities typical of today's enterprise configurations, it is probably, on average, not wise to change BUFSIZE as a blanket policy. For large data sets it is a fairly expensive experiment, since multiple copies of the data set are required during the testing phase.

To show the effect of reading a data set from the Solaris file buffer cache, here are the timings from 2 consecutive runs where we read our household data set (but do no writes):

data _null_;
   set gold.hrecs(keep=state);   /* no state codes of '00' */
   where state = '00';
run;

            Real      User      System
1st time    2:...     ...       ...
2nd time    ...       ...       ...
Comparison of data set read from disk and buffer cache

Obviously, there is a big advantage to being able to access files from the Solaris buffer cache. We can use the Memtool command prtmem (see Appendix C on how to obtain this tool) to show the distribution of memory. Before the run above, prtmem would show something similar to:

$ prtmem
Total memory:          1468 Megabytes
Kernel Memory:          125 Megabytes
Application memory:      13 Megabytes
Executable memory:       32 Megabytes
Buffercache memory:     184 Megabytes
Free memory:           1112 Megabytes

Note that there is 180+ MB of memory being used for the buffer cache and 1.1 GB free.

$ prtmem
Total memory:          1468 Megabytes
Kernel Memory:          125 Megabytes
Application memory:      13 Megabytes
Executable memory:       32 Megabytes
Buffercache memory:    1272 Megabytes
Free memory:             24 Megabytes

After the data set read, free memory went to 24 MB and buffer cache memory grew to 1.2 GB. This is why the free memory shown in vmstat(1) can sometimes be misleading; the same is true for the page-in and page-out columns. There is plenty of memory available, and as long as no one is requesting it, memory will be used for the buffer cache. This is directly relevant to the memory discussion as well as the priority paging section. If the data sets are larger than the file cache, it is possible that the buffer cache could get in the way; see the section on Direct I/O below.

In our experience, modifying the SAS System BUFNO option made no difference in performance, nor did it seem to make a difference in the amount of memory used as reported by FULLSTIMER. SAS BUFSIZE and BUFNO make more of a difference in legacy mainframe environments.

Data Set Compression

The SAS System has an option to compress data sets at creation, either by specifying (COMPRESS=YES) on the data step or by using OPTIONS COMPRESS=YES;. Compression can typically save 2x the amount of space that a data set might normally occupy. However, there is no such thing as a free lunch: saving disk space comes at the expense of increased CPU time to compress the data as it is written and decompress the data as it is read. If your system is CPU bound, then compressed data sets should probably not be used. However, if your applications have a dominant I/O component, you may want to consider the compress option; we have a documented case of a ~15% performance increase using compressed data sets versus uncompressed data sets. An indication that an application has a large I/O component is either a large system time as reported by FULLSTIMER or a large differential between real time and (user + system) time. However, this could also be attributed to excessive paging.
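A minimal way to run that kind of comparison yourself, as a sketch (the library path and data set name are placeholders), is to copy the same data set with and without compression under FULLSTIMER and compare the log entries:

$ sas -fullstimer -stdio 2>compress_test.log <<'EOF'
libname gold '/sasdata';
data work.plain;                  /* uncompressed copy */
   set gold.hrecs;
run;
data work.packed (compress=yes);  /* compressed copy: trades CPU for I/O */
   set gold.hrecs;
run;
EOF

With -stdio, the SAS log (including the FULLSTIMER statistics for both steps) arrives on stderr, so the two data steps can be compared directly in compress_test.log.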

File System (UFS) Tuning

In general, there is no tuning to be done. If a large data volume is created to hold large files (as opposed to many small ones), consider using the option to decrease the number of inodes. The default is to create 1 inode per 2K of file space; changing this to 1 inode per 32K not only provides more space for the file area but can also decrease fsck(1m) times. Also, for large volumes, you might consider decreasing the free space percentage from 10% to perhaps 2-3%. An example of a newfs(1m) command implementing these 2 suggestions:

# /usr/sbin/newfs -v -i 32k -m 3 <device name>

Direct I/O

When direct I/O is enabled via a mount(1m) option, data is transferred directly into the user buffer space and the transfer to the kernel buffer cache is avoided. If doing large amounts of sequential I/O, this could be helpful. However, in our experimentation we saw the opposite effect: doing a simple data set copy, we saw the copy go from ~35 seconds to over 3 minutes when the file system was mounted with the -forcedirectio option. The Solaris file system buffer cached the data and was able to service the read system calls more efficiently. Even in very large tests with direct I/O turned on we did not see a benefit. Leaving direct I/O off (Solaris buffer cache enabled), even under heavy load, was beneficial to system performance.
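Direct I/O is enabled per file system at mount time. If you want to test it for your own workload, a sketch (the device and mount point are assumptions):

# umount /sasdata
# mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s6 /sasdata   # direct I/O on

# umount /sasdata
# mount -F ufs /dev/dsk/c1t0d0s6 /sasdata                    # back to buffered (default)

Given the results above, treat forcedirectio as an experiment to be measured, not a default.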

Other Kernel Parameters

In general, there is no Solaris kernel tuning that needs to be done. Unlike DBMS configurations or other applications, where you might need to increase shared memory or other kernel parameters, this is not needed for typical SAS applications. Historically, many people thought it necessary to tune the Solaris buffer cache. On other UNIX systems this was done by changing a kernel parameter, often bufhwm. On Solaris, that parameter is used exclusively for UFS file system metadata and has little to no effect on the memory/file I/O page cache.

Other Commercial File Systems

There are several other commercial file systems available on the market, all of which have various features and benefits that can make the lives of system administrators easier. Examples are QFS and VERITAS VxFS. Some performance benefit could be gained by using these with various SAS applications. It is important to note that the Sun UFS file system has had major performance improvements over previous versions of the Solaris operating system; in some cases it can perform as well as or better than the other commercial file systems used with Sun Solaris.

Choosing the Right Storage

I/O performance isn't free; it takes CPU time to make it happen. The overall goal is to achieve high bandwidth I/O with as little CPU utilization as possible. The lowest CPU cost for I/O comes from native file system implementations (such as Solaris's UFS) running on top of a direct attached storage subsystem or a storage area network (SAN). The native file system is tightly integrated with the host operating system and is highly tuned to minimize the I/O cost on CPUs. The next lowest CPU cost comes from native Network File System (NFS) implementations. NFS implementations are tightly integrated with the host operating system, but carry CPU cost for protocol overhead not associated with native file systems. There is research aimed at removing some of the network overhead to more closely match native file system performance. The most expensive I/O comes from third party file system implementations layered on top of either direct attached JBODs or SAN attached platforms. These file systems are not as tightly integrated with the operating system as native file systems or NFS implementations; this type of I/O subsystem is only suitable for occasional or small data transfers.

I/O Summary

The classical SAS program typically performs sequential processing on large data sets. Thus:

- Configure your data areas for blocked sequential I/O and your WORK area for write intensive applications. Since the WORK area can be configured as a system wide resource, ensure that the I/O channels back to the host are sufficiently wide.
- Sometimes adding CPU resources can cause an overall system performance degradation, because the added CPUs create an increased I/O burden that pushes the system past a previously optimal I/O configuration. When adding CPUs, ensure that the I/O subsystem can handle the increased requirements.
- The data volume layout can make a substantial difference in performance. In one of our tests, a storage platform with 4 times the bandwidth to the host and disks with almost 2 times the rotational speed ran I/O intensive tests slower, because the data on the faster platform was not striped. In other words, a significantly slower storage platform outperformed a faster one when the underlying volume on the slower storage was configured more optimally.
- Although the SAS System tends to issue relatively small read/write requests, choose a moderate size for the stripe unit. Using 64K is probably a good general rule of thumb (see the sketch after this list).
- Don't change the SAS System BUFSIZE and BUFNO options unless you can do so in a controlled environment.
- Consider compressing SAS data sets if applications are dominated by a large I/O component. Note that compression costs additional CPU resources.
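As an illustration of the 64K stripe unit suggestion, here is roughly what creating a 4-column stripe looks like under Solstice DiskSuite, assuming the DiskSuite state databases are already in place (the metadevice name and disk slices are placeholders; VERITAS Volume Manager has an equivalent):

# metainit d10 1 4 c1t0d0s0 c2t0d0s0 c3t0d0s0 c4t0d0s0 -i 64k
# newfs /dev/md/rdsk/d10

This builds one stripe across 4 disks on 4 separate controllers with a 64K interlace, then puts a UFS file system on top; spreading the columns across controllers follows the "more spindles and controllers" guideline above.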

7. Memory

Baseline Memory Recommendations

From the hardware/OS point of view, memory is one of the most important areas for performance increases and decreases, as well as for predictable performance under load. You can think of memory as money in the bank. Consider a family of 5, all of whom make deposits as well as withdrawals. Even if the overall monthly cash in versus cash out is positive, there could still be many times when there is not enough cash at a given moment to cover a requested withdrawal. The issue with memory is the same. Continuing the bank account analogy, where there may be automatic monthly debits to the account, administrators and users must also take into account other system overhead when gathering memory usage requirements.

You can have sufficient CPU cycles and an optimally configured I/O subsystem, but without sufficient memory resources, performance will be dismal, if your application runs at all. The bottom line is that you want the sum total of requested memory plus system overhead to fit within the confines of your physical RAM configuration. This is absolutely crucial for predictable performance under load. As a ballpark starting point, we generally recommend 1-2 GB of memory per CPU. On a more granular level, allocate ... MB for base Solaris, ... MB for the windowing system and 32 MB per SAS session. We will discuss how any individual SAS session could very easily need to be increased to .5 GB or even over 2 GB.

To accumulate SAS session memory requirements on a system level, we must know:
- How many users, or more specifically SAS processes, are running concurrently?
- For each SAS process, what is the maximum memory requirement as reported by SAS FULLSTIMER?

This sounds as if it should be fairly simple, but in practice it is nearly impossible to find this number, and it is even more difficult on an active multi-user system.

Paging

The Solaris memory system provides virtual memory management on a demand paged basis. The swap device is used as the physical backing store for this virtual address space. In many cases, the SAS software does virtual memory management on behalf of the SAS process: memory is typically mmap(2)ed up to the limit of MEMSIZE if the specific application requires it or believes it can take advantage of it. While Solaris does an excellent job of paging, allowing the SAS virtual memory management to page on top of Solaris is completely detrimental to application performance.

The key to predictable performance is to allow each SAS user a portion of memory such that the memory requirements for all concurrent SAS users can be resident in memory. Let's examine paging from both the system and SAS user's perspective.

From a system perspective, here are 2 "quick & dirty" ways to determine if your system is experiencing excessive paging, using tools bundled with Solaris:

- vmstat: consistently large values in the "sr" (scan rate) column could indicate a memory shortage; specifically, scan rates of several hundred pages per second for extended periods of time.
- Look for activity on the swap device (an individual disk partition) with iostat.

Users can check the memory usage of their own applications with a very useful utility called prstat (see Appendix C for more details).

[prstat output: one SAS session running on this system; note the SIZE and RSS columns]

Of particular interest are the SIZE and RSS columns. SIZE represents the amount of virtual address space and RSS represents the amount of memory which is actually resident. A simple way to think about whether you have enough memory is to sum the SIZE values, or virtual address space (the amount all processes want), and the RSS values (the amount all processes have). If the 2 values are roughly equal (requested = actual), then you have enough memory. However, if the actual (RSS) is much less than the requested (SIZE), then paging is likely an issue. The sum of the RSS column should be less than the amount of physical RAM in your system or you could have a memory shortage.
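One rough way to total up what all SAS sessions are actually holding is to sum the RSS values from ps(1). A sketch (matching processes on the name "sas" is a simplification):

$ ps -eo rss,comm | awk '/sas/ {sum += $1} END {printf "%d MB resident in SAS processes\n", sum/1024}'

Compare the result against physical RAM; the same pipeline with vsz in place of rss totals the requested virtual size.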

The Memtool prtmem command (see Appendix C) is also very useful in showing the breakdown of application memory and Solaris buffer cache memory:

$ prtmem
Total memory:          1468 Megabytes
Kernel Memory:          117 Megabytes
Application memory:      14 Megabytes
Executable memory:       41 Megabytes
Buffercache memory:       7 Megabytes
Free memory:           1287 Megabytes

How Bad Does it Hurt?

With prstat we just saw a relatively quiet system (just one SAS job running). Now we start 2 SAS jobs, each of which will eventually request 1.2 GB of memory. These 2 jobs will collectively request 2.4 GB of memory, but on this system we only have 1.5 GB of physical RAM. This particular proc (MDDB) requires a minimum MEMSIZE such that the N-way cube will fit in virtual memory (1.2 GB in this case). If our system only had .5 GB of memory and we were running just 1 job, you would have no choice but to allow Solaris to page on behalf of the SAS System (utilize virtual memory and the SWAP device).

And vmstat is also showing a memory shortage, with very high scan rates (sr). When you see scan rates higher than 300 for extended periods, it can be a sign of memory shortage:

$ vmstat 5
 procs     memory           page             disk           faults      cpu
 r b w   swap  free  re mf pi po fr de sr   s6 sd sd sd   in sy cs   us sy id
 [data rows not reproduced]

Note that you could also run prstat on the system and watch the RSS and SIZE values for the SAS jobs; if combined they exceed your physical RAM size, you are swapping.

Look at how this memory "shortage" affects performance. When 1 job is run, this proc runs in about 12 minutes (see real time below):

NOTE: PROCEDURE MDDB used:
      time:                       memory:
      real         12:...         page faults     3882
      user cpu      7:...         page reclaims      0
      system cpu    2:...         usage            ... M

When 2 jobs are run, there should be plenty of CPU bandwidth, since this is an 8-way system. However, due to the memory contention, the run time goes from approximately 12 minutes to over 1 hour. All of the extra time was spent paging. Recall that we had 1.5 GB of RAM and the 2 jobs by themselves wanted 2.4 GB.

         Time
job 1    1:05:16
job 2    1:14:33
Job times when memory constrained

Let's look at this in another way: we'll show how a faster system can produce slower results if there is a memory shortage. We have 2 different MDDB procs which respectively require 547 MB and 1.1 GB of memory. Our "slower" system has 250 MHz processors and 1.5 GB RAM; our "faster" system has 300 MHz processors but only 1 GB RAM.

                       "Slower" system (1.5 GB RAM)   "Faster" system (1 GB RAM)
MDDB proc (547 MB)     7:...                          6:...
MDDB proc (1.1 GB)     ...:09.9                       ...:52.4
How faster CPUs can be affected by memory shortages

We expect a near linear increase in performance for CPU bound applications; thus we expect the "faster" system to outperform the "slower" system. The results show that this is true when the problem fit in memory, but when it didn't, the time practically doubled and the "faster" system actually ran much slower.

We don't intend to single out proc MDDB in our examples. There are other SAS PROCs which can potentially require a large minimum amount of memory in order to run; some examples include IML, GLM, and certain data mining procs.

SWAP - A Necessary Evil: Gotta' have it, even if you don't use it

As mentioned in the I/O section earlier, plenty of SWAP space must be allocated. How much can only be determined collectively by the users and system administrators.

There must be enough to back all mmap(2)ed requests or jobs could fail with insufficient memory. Thus, even if you have enough physical memory for jobs to be resident, you could still get an insufficient memory error if there is not enough SWAP space to service the reservation. Given that any user can ask for an arbitrarily large memory footprint by specifying MEMSIZE=<BIG VALUE> or MEMSIZE=0, it is not immediately obvious to system administrators how much SWAP to allocate. Also, don't confuse virtual memory with real memory: a physical memory access is several orders of magnitude faster than a disk access. The main goal is to eliminate or minimize the activity to the SWAP device (virtual memory = physical RAM + SWAP).

Increasing MEMSIZE/SORTSIZE - when does it help, when does it not?

There are 2 options when running SAS applications which control memory usage: MEMSIZE and SORTSIZE. They are set at 32 MB and 16 MB respectively in the default configuration file (sasv8.cfg or sasv9.cfg, depending on the version of the SAS software). MEMSIZE is the total amount of memory that the SAS System may allocate on behalf of a SAS process. The SORT procedure will use up to SORTSIZE memory, so as a general rule of thumb MEMSIZE should be at least (SORTSIZE + ~4 MB), just to ensure that there is enough memory to meet the SAS requirements.

Let's take a closer look at MEMSIZE. MEMSIZE is an upper limit. Consider 3 categories of SAS procs when looking at individual MEMSIZE settings:

i) Uses a small fixed amount of memory regardless of the value of MEMSIZE
ii) Uses more memory as MEMSIZE increases
iii) Requires a *minimum* amount of memory based on data set size or other programmatic requirements

i) Uses a small fixed amount of memory regardless of the value of MEMSIZE

If you have MEMSIZE set to a large value and the FULLSTIMER option reports some small value of memory used, then increasing MEMSIZE won't help. For instance:

      memory usage: 57 K

Many data steps and procs (FREQ, TABULATE, etc.) use only a small amount of memory. However, if the amount of memory reportedly used is close to MEMSIZE, increasing MEMSIZE may help.

ii) Uses more memory as MEMSIZE increases

The SORT procedure is a good example. It will use as much memory as specified in SORTSIZE. However, in general, unless the entire data set can fit in memory, performance will remain flat despite the fact that more memory is used. Thus, you could hurt overall system performance by consuming more memory even though your application experiences no benefit. Below are the results of SORT on our 1 GB household data set.

SORTSIZE   Memory Used   Time
16m        ... MB        8:...
...        ... MB        8:...
...        ... MB        8:...
...        ... MB        8:...
...        ... MB        8:...
...        ... MB        8:...
...        ... MB        9:...
...        ... MB        8:...
...        ... MB        3:30.72
Effects of Changing SORTSIZE

As you can see, the times were flat around 8-9 minutes until the data set fit in memory, and then the time went down to 3+ minutes. Unless you are sure the data set will fit in memory AND you won't contend with other user requirements, don't change SORTSIZE. There is an undocumented SORT parameter, UBUFSIZE; however, in our testing scenario we did not see any performance benefit from changing it from its default value of 8K.

SORTSIZE could exceed the amount of physical RAM or the amount of available memory at a given time. In this case the SAS System would be relying on the virtual memory paging facility of Solaris, and performance could end up much worse than with a small or default SORTSIZE of 16 MB.
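The sweep shown in the table above is easy to reproduce with a short script; a sketch (the job name and the particular sizes are placeholders):

#!/bin/sh
# Hypothetical SORTSIZE sweep: rerun the same sort job on a quiet
# system with increasing SORTSIZE and compare the FULLSTIMER output.
for SZ in 16m 64m 256m 1024m; do
    sas -fullstimer -sortsize $SZ -log sort_$SZ.log sort_job.sas
done
grep -i "real time" sort_*.log      # compare run times across settings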

If SORT is not performing a complete in-memory sort, the resulting runtime will typically be dominated by the I/O component. This can be seen as a proportionally large system time in the FULLSTIMER output; alternatively, a REAL time component larger than the sum of the USER + SYSTEM components demonstrates the same effect. In this case, the I/O configuration will be critical to maximizing performance.

Similar effects can be seen with the LOGISTIC procedure. In this case, the threshold was basically the size of the data set. If MEMSIZE was set to something less than the size of the data set, only 2 MB of memory was used, and the performance was only marginally slower than when the whole data set fit in memory. Thus it is arguable whether the benefit gained is worth the cost of memory consumption.

MEMSIZE Setting   Memory Used   Time
16m               1.3 MB        7:01:12
32m               1.3 MB        7:11:51
64m               1.3 MB        7:14:42
128m              1.3 MB        7:07:31
256m              1.3 MB        7:08:01
512m              416 MB        6:28:32
Changing MEMSIZE in LOGISTIC

In this case, 30 minutes out of 7 hours was saved by fitting the entire data set in memory. The cost for this savings was basically 500 MB of memory, about half the memory available to all users of the system.

Similar to the SORT procedure, the PHREG procedure (used for survival analysis studies) shows the same behavior, though in a more stair-step fashion. Although the data set fit in memory at the upper MEMSIZE values, we never saw any real increase in performance. Thus, increasing MEMSIZE in this case only hurt, because it consumed memory resources and made less available for other system processes.

MEMSIZE Setting   Memory Used   Time
16m               15.0 MB       3:07
32m               30.1 MB       3:04
64m               40.4 MB       3:01
128m              40.4 MB       3:02
256m              ... MB        3:09
512m              ... MB        3:09
768m              ... MB        3:09
Changing MEMSIZE in PHREG

To revisit the paging issues discussed above, here is an example proving the value of finding the "sweet spot" for the MEMSIZE setting. We had a large healthcare drug safety application using the IML procedure. The system had 1.5 GB RAM. In initial testing, MEMSIZE values of 1 GB and 2 GB were used. When 1 GB was used, the job did not run due to insufficient memory. Because the users weren't sure exactly how much memory proc IML required in their situation, a MEMSIZE of 2 GB was used. When set at 2 GB (.5 GB over the physical RAM configuration), the job ran in 23 hours, with user + system time coming in around 13 hours. It was suggested that they try to discover a MEMSIZE value that would allow the proc to run while staying under the boundary of their physical RAM configuration; if that wasn't possible, they would have to rely on the virtual memory system. The good news was that with a MEMSIZE setting of 1.2 GB, the real time was brought down to 13 hours, reducing the job time by 10 hours. It was definitely worth the trial and error effort to discover the "minimal maxima" memory requirement. You can increase MEMSIZE (or SORTSIZE) at invocation:

<SAS_INSTALL_DIR>/sas -memsize 1024m myprog.sas

This kind of testing is made more difficult if other people or processes are contending for system resources at the time of your tests. If you have a performance baseline, you can still do valid comparisons: the user and system times should remain about the same between runs regardless of CPU and memory contention. If the real (wall clock) time differs wildly between runs, then there is likely contention for CPU, I/O or memory resources.

Summary

In general, if an application is not particularly I/O intensive and your wall clock time is 2 or more times the combined user + system time, look closely for CPU and memory/paging constraints. Also be sure that you have enough backing store (SWAP), or you may see unpredictable results. Only the users and system administrators together can determine the cost/benefit ratio of increasing memory resources.

Increase MEMSIZE carefully and judiciously. If you know that you can benefit from an increase in memory, evaluate its effects on a system wide basis. More often than not, using a smaller MEMSIZE will give you more predictable performance when the system is under load. Think of it as a large sandbox with a lot of toys: the more kids in there, the fewer toys there will be to play with. It's a lot more pleasant for all the kids if they share rather than fight to get toys exclusively for themselves.

In certain cases, it is quite possible that every page the SAS application asks for causes a page fault, and performance will be dismal. If the application can't be recoded (fewer iterations or fewer variables used), the only way to improve performance is to add memory to the system.

If these jobs are critical for your job function, then it should be justifiable to increase the system memory size.

We have hopefully demonstrated the effects of paging. When asked "Do SAS applications require a lot of memory?", the answer is: in general, no, SAS applications do not require a lot of memory. For most procedures, the SAS System works extremely well in low memory configurations. However, specific applications can have large memory requirements, either by user directive or because there are problems to solve which require large memory configurations.

8. CPU

Does clock speed make a difference? Here we look at a very commonly used SAS procedure which is particularly CPU intensive. Logistic regression is used to find patterns in data and is often used in data mining and decision support applications. A forward, stepwise and backward logistic regression was run on a data set which had 200,000 observations, ~500 variables, and a record length of about 1500 bytes. The total size of the data set was ~320 MB. We ran the tests on different Sun UltraSPARC servers at clock speeds of 167 MHz, 250 MHz, 300 MHz, 336 MHz and 750 MHz. The external cache varied on these systems from .5 MB to 8 MB, which might slightly affect the results but was probably insignificant relative to the total processing time.

[Chart: Regression Test Results Using Different Speeds of Processors - run time (hours:minutes:seconds) for forward, stepwise and backward regression at 167, 250, 300, 336 and 750 MHz]

Regression Test Results Using Different Speeds of Processors
Hours:Minutes:Seconds (captured from the SAS FULLSTIMER option)

            167 MHz     250 MHz     300 MHz     336 MHz     750 MHz
forward     9:43:32     7:12:24     6:28:33     5:24:37     3:12:15
stepwise    10:03:59    7:28:22     7:08:08     5:43:15     3:47:28
backward    40:54:58    30:31:33    26:50:03    22:57:47    14:09:19

Note the near linear increase in performance as the clock speed increases. This speaks extremely well for the UltraSPARC processor's scalability as the MHz increases. At the time of this writing, Sun UltraSPARC processors are available running at speeds in excess of 1 GHz (we hope to rerun this test on newer processors shortly). Keep in mind this is a purely CPU intensive application that is not much affected by I/O or memory.

CPU Contention

Let's examine the timesharing effects on CPU bound applications. How are application times affected when there are multiple jobs contending for a single CPU? We used PROC GLM (general linear model), which took about ~1 hour to run as a single job and used about ~270 MB of memory.

real      1:07:01
user      1:03:...
system    ... seconds

We ran 4 of these jobs with 4 CPUs enabled. Since each job required ~270 MB of RAM and the system had 1.5 GB RAM, the sum total memory required easily fit within the physical RAM configuration. Since our system had 8 processors, 4 of them were turned off for the test. Not many system administrators would ever want to turn off processors. But perhaps it would be a good April Fools' joke to turn off 105 processors of a fully configured Sun Fire 15K (aka Starcat). To turn processors off, use the psradm(1m) command; this command must be run as root. Use the -f option to turn off processors and the -n option to turn them back on.

# /usr/sbin/psradm -f 8 9 12 13
# /usr/sbin/psrinfo
0    on-line   since 01/22/99 10:01:50
1    on-line   since 01/22/99 10:02:...
4    on-line   since 01/26/99 09:05:20
5    on-line   since 01/26/99 09:05:20
8    off-line  since 01/25/99 17:00:26
9    off-line  since 01/25/99 17:00:26
12   off-line  since 01/25/99 17:00:26
13   off-line  since 01/25/99 17:00:26

We start the 4 jobs simultaneously and get the following results:

        Job 1      Job 2      Job 3      Job 4
Real    1:22:37    1:22:34    1:22:43    1:22:34
User    1:20:15    1:20:11    1:20:12    1:20:06
4 Jobs on 4 CPUs

Note that the jobs all ran with little performance degradation compared to when only 1 job was run. There are 4 jobs running and 4 CPUs enabled; don't forget that the Solaris kernel itself needs CPU resources, not to mention the many other system applications that are running. Thus, with no memory contention and no CPU contention, we get comparable performance.

Now we turn off 2 of the CPUs so that only 2 CPUs are active:

# /usr/sbin/psradm -f 4 5

To confirm that only 2 are active:

# /usr/sbin/psrinfo
0    on-line   since 01/22/99 10:01:50
1    on-line   since 01/22/99 10:02:...
4    off-line  since 01/25/99 22:26:17
5    off-line  since 01/25/99 22:26:17
8    off-line  since 01/25/99 17:00:26
9    off-line  since 01/25/99 17:00:26
12   off-line  since 01/25/99 17:00:26
13   off-line  since 01/25/99 17:00:26

As expected, the job times roughly doubled. If 1 job per CPU ran in ~1.3 hours, it would be expected that 2 jobs per CPU would take ~2.5 hours.

        Job 1      Job 2      Job 3      Job 4
Real    2:33:04    2:42:39    2:37:05    2:42:58
User    1:20:15    1:20:11    1:20:12    1:20:06
4 Jobs on 2 CPUs
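The contention experiment is easy to reproduce with a small driver script. A sketch (the GLM job name is a placeholder):

#!/bin/sh
# Launch 4 identical CPU-bound SAS jobs at the same moment and wait for all.
for N in 1 2 3 4; do
    sas -fullstimer -log glm_job$N.log glm_job.sas &
done
wait                                # block until every job finishes
grep -i "real time" glm_job?.log    # compare against the single-job baseline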


These sub-systems are all highly dependent on each other. Any one of them with high utilization can easily cause problems in the other. Abstract: The purpose of this document is to describe how to monitor Linux operating systems for performance. This paper examines how to interpret common Linux performance tool output. After collecting

More information

Oracle Database Scalability in VMware ESX VMware ESX 3.5

Oracle Database Scalability in VMware ESX VMware ESX 3.5 Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises

More information

Benchmarking Hadoop & HBase on Violin

Benchmarking Hadoop & HBase on Violin Technical White Paper Report Technical Report Benchmarking Hadoop & HBase on Violin Harnessing Big Data Analytics at the Speed of Memory Version 1.0 Abstract The purpose of benchmarking is to show advantages

More information

SAS System for Windows: Integrating with a Network Appliance Filer Mark Hayakawa, Technical Marketing Engineer, Network Appliance, Inc.

SAS System for Windows: Integrating with a Network Appliance Filer Mark Hayakawa, Technical Marketing Engineer, Network Appliance, Inc. Paper 220-29 SAS System for Windows: Integrating with a Network Appliance Filer Mark Hayakawa, Technical Marketing Engineer, Network Appliance, Inc. ABSTRACT The goal of this paper is to help customers

More information

Performance Report Modular RAID for PRIMERGY

Performance Report Modular RAID for PRIMERGY Performance Report Modular RAID for PRIMERGY Version 1.1 March 2008 Pages 15 Abstract This technical documentation is designed for persons, who deal with the selection of RAID technologies and RAID controllers

More information

1 Storage Devices Summary

1 Storage Devices Summary Chapter 1 Storage Devices Summary Dependability is vital Suitable measures Latency how long to the first bit arrives Bandwidth/throughput how fast does stuff come through after the latency period Obvious

More information

Whitepaper: performance of SqlBulkCopy

Whitepaper: performance of SqlBulkCopy We SOLVE COMPLEX PROBLEMS of DATA MODELING and DEVELOP TOOLS and solutions to let business perform best through data analysis Whitepaper: performance of SqlBulkCopy This whitepaper provides an analysis

More information

Parallels Virtuozzo Containers

Parallels Virtuozzo Containers Parallels Virtuozzo Containers White Paper Virtual Desktop Infrastructure www.parallels.com Version 1.0 Table of Contents Table of Contents... 2 Enterprise Desktop Computing Challenges... 3 What is Virtual

More information

SIDN Server Measurements

SIDN Server Measurements SIDN Server Measurements Yuri Schaeffer 1, NLnet Labs NLnet Labs document 2010-003 July 19, 2010 1 Introduction For future capacity planning SIDN would like to have an insight on the required resources

More information

AIX NFS Client Performance Improvements for Databases on NAS

AIX NFS Client Performance Improvements for Databases on NAS AIX NFS Client Performance Improvements for Databases on NAS October 20, 2005 Sanjay Gulabani Sr. Performance Engineer Network Appliance, Inc. gulabani@netapp.com Diane Flemming Advisory Software Engineer

More information

Introduction. What is RAID? The Array and RAID Controller Concept. Click here to print this article. Re-Printed From SLCentral

Introduction. What is RAID? The Array and RAID Controller Concept. Click here to print this article. Re-Printed From SLCentral Click here to print this article. Re-Printed From SLCentral RAID: An In-Depth Guide To RAID Technology Author: Tom Solinap Date Posted: January 24th, 2001 URL: http://www.slcentral.com/articles/01/1/raid

More information

RAID Overview: Identifying What RAID Levels Best Meet Customer Needs. Diamond Series RAID Storage Array

RAID Overview: Identifying What RAID Levels Best Meet Customer Needs. Diamond Series RAID Storage Array ATTO Technology, Inc. Corporate Headquarters 155 Crosspoint Parkway Amherst, NY 14068 Phone: 716-691-1999 Fax: 716-691-9353 www.attotech.com sales@attotech.com RAID Overview: Identifying What RAID Levels

More information

Web Application s Performance Testing

Web Application s Performance Testing Web Application s Performance Testing B. Election Reddy (07305054) Guided by N. L. Sarda April 13, 2008 1 Contents 1 Introduction 4 2 Objectives 4 3 Performance Indicators 5 4 Types of Performance Testing

More information

Muse Server Sizing. 18 June 2012. Document Version 0.0.1.9 Muse 2.7.0.0

Muse Server Sizing. 18 June 2012. Document Version 0.0.1.9 Muse 2.7.0.0 Muse Server Sizing 18 June 2012 Document Version 0.0.1.9 Muse 2.7.0.0 Notice No part of this publication may be reproduced stored in a retrieval system, or transmitted, in any form or by any means, without

More information

IBM Systems and Technology Group May 2013 Thought Leadership White Paper. Faster Oracle performance with IBM FlashSystem

IBM Systems and Technology Group May 2013 Thought Leadership White Paper. Faster Oracle performance with IBM FlashSystem IBM Systems and Technology Group May 2013 Thought Leadership White Paper Faster Oracle performance with IBM FlashSystem 2 Faster Oracle performance with IBM FlashSystem Executive summary This whitepaper

More information

Practical issues in DIY RAID Recovery

Practical issues in DIY RAID Recovery www.freeraidrecovery.com Practical issues in DIY RAID Recovery Based on years of technical support experience 2012 www.freeraidrecovery.com This guide is provided to supplement our ReclaiMe Free RAID Recovery

More information

Getting the Most Out of Your High-End UNIX and NT Server with SAS

Getting the Most Out of Your High-End UNIX and NT Server with SAS Getting the Most Out of Your High-End UNIX and NT Server with SAS Margaret Crevar Corporate Technology Center SAS Institute Leigh Ihnen Numerical Architecture & Performance SAS Institute Gary Mehler Technology

More information

my forecasted needs. The constraint of asymmetrical processing was offset two ways. The first was by configuring the SAN and all hosts to utilize

my forecasted needs. The constraint of asymmetrical processing was offset two ways. The first was by configuring the SAN and all hosts to utilize 1) Disk performance When factoring in disk performance, one of the larger impacts on a VM is determined by the type of disk you opt to use for your VMs in Hyper-v manager/scvmm such as fixed vs dynamic.

More information

Moving Virtual Storage to the Cloud. Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage

Moving Virtual Storage to the Cloud. Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage Moving Virtual Storage to the Cloud Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage Table of Contents Overview... 1 Understanding the Storage Problem... 1 What Makes

More information

COSC 6374 Parallel Computation. Parallel I/O (I) I/O basics. Concept of a clusters

COSC 6374 Parallel Computation. Parallel I/O (I) I/O basics. Concept of a clusters COSC 6374 Parallel Computation Parallel I/O (I) I/O basics Spring 2008 Concept of a clusters Processor 1 local disks Compute node message passing network administrative network Memory Processor 2 Network

More information

TECHNICAL BRIEF. Primary Storage Compression with Storage Foundation 6.0

TECHNICAL BRIEF. Primary Storage Compression with Storage Foundation 6.0 TECHNICAL BRIEF Primary Storage Compression with Storage Foundation 6.0 Technical Brief Primary Storage Compression with Storage Foundation 6.0 Contents Introduction... 4 What is Compression?... 4 Differentiators...

More information

Seradex White Paper. Focus on these points for optimizing the performance of a Seradex ERP SQL database:

Seradex White Paper. Focus on these points for optimizing the performance of a Seradex ERP SQL database: Seradex White Paper A Discussion of Issues in the Manufacturing OrderStream Microsoft SQL Server High Performance for Your Business Executive Summary Microsoft SQL Server is the leading database product

More information

WHITE PAPER BRENT WELCH NOVEMBER

WHITE PAPER BRENT WELCH NOVEMBER BACKUP WHITE PAPER BRENT WELCH NOVEMBER 2006 WHITE PAPER: BACKUP TABLE OF CONTENTS Backup Overview 3 Background on Backup Applications 3 Backup Illustration 4 Media Agents & Keeping Tape Drives Busy 5

More information

Monitoring Databases on VMware

Monitoring Databases on VMware Monitoring Databases on VMware Ensure Optimum Performance with the Correct Metrics By Dean Richards, Manager, Sales Engineering Confio Software 4772 Walnut Street, Suite 100 Boulder, CO 80301 www.confio.com

More information

All-Flash Arrays Weren t Built for Dynamic Environments. Here s Why... This whitepaper is based on content originally posted at www.frankdenneman.

All-Flash Arrays Weren t Built for Dynamic Environments. Here s Why... This whitepaper is based on content originally posted at www.frankdenneman. WHITE PAPER All-Flash Arrays Weren t Built for Dynamic Environments. Here s Why... This whitepaper is based on content originally posted at www.frankdenneman.nl 1 Monolithic shared storage architectures

More information

CS 377: Operating Systems. Outline. A review of what you ve learned, and how it applies to a real operating system. Lecture 25 - Linux Case Study

CS 377: Operating Systems. Outline. A review of what you ve learned, and how it applies to a real operating system. Lecture 25 - Linux Case Study CS 377: Operating Systems Lecture 25 - Linux Case Study Guest Lecturer: Tim Wood Outline Linux History Design Principles System Overview Process Scheduling Memory Management File Systems A review of what

More information

Moving Virtual Storage to the Cloud

Moving Virtual Storage to the Cloud Moving Virtual Storage to the Cloud White Paper Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage www.parallels.com Table of Contents Overview... 3 Understanding the Storage

More information

Performance and Tuning Guide. SAP Sybase IQ 16.0

Performance and Tuning Guide. SAP Sybase IQ 16.0 Performance and Tuning Guide SAP Sybase IQ 16.0 DOCUMENT ID: DC00169-01-1600-01 LAST REVISED: February 2013 Copyright 2013 by Sybase, Inc. All rights reserved. This publication pertains to Sybase software

More information

PEPPERDATA IN MULTI-TENANT ENVIRONMENTS

PEPPERDATA IN MULTI-TENANT ENVIRONMENTS ..................................... PEPPERDATA IN MULTI-TENANT ENVIRONMENTS technical whitepaper June 2015 SUMMARY OF WHAT S WRITTEN IN THIS DOCUMENT If you are short on time and don t want to read the

More information

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can

More information

Backup architectures in the modern data center. Author: Edmond van As edmond@competa.com Competa IT b.v.

Backup architectures in the modern data center. Author: Edmond van As edmond@competa.com Competa IT b.v. Backup architectures in the modern data center. Author: Edmond van As edmond@competa.com Competa IT b.v. Existing backup methods Most companies see an explosive growth in the amount of data that they have

More information

HADOOP ON ORACLE ZFS STORAGE A TECHNICAL OVERVIEW

HADOOP ON ORACLE ZFS STORAGE A TECHNICAL OVERVIEW HADOOP ON ORACLE ZFS STORAGE A TECHNICAL OVERVIEW 757 Maleta Lane, Suite 201 Castle Rock, CO 80108 Brett Weninger, Managing Director brett.weninger@adurant.com Dave Smelker, Managing Principal dave.smelker@adurant.com

More information

VERITAS Database Edition 2.1.2 for Oracle on HP-UX 11i. Performance Report

VERITAS Database Edition 2.1.2 for Oracle on HP-UX 11i. Performance Report VERITAS Database Edition 2.1.2 for Oracle on HP-UX 11i Performance Report V E R I T A S W H I T E P A P E R Table of Contents Introduction.................................................................................1

More information

Capacity Planning for Microsoft SharePoint Technologies

Capacity Planning for Microsoft SharePoint Technologies Capacity Planning for Microsoft SharePoint Technologies Capacity Planning The process of evaluating a technology against the needs of an organization, and making an educated decision about the configuration

More information

Virtual server management: Top tips on managing storage in virtual server environments

Virtual server management: Top tips on managing storage in virtual server environments Tutorial Virtual server management: Top tips on managing storage in virtual server environments Sponsored By: Top five tips for managing storage in a virtual server environment By Eric Siebert, Contributor

More information

Storage Technologies for Video Surveillance

Storage Technologies for Video Surveillance The surveillance industry continues to transition from analog to digital. This transition is taking place on two fronts how the images are captured and how they are stored. The way surveillance images

More information

COSC 6374 Parallel Computation. Parallel I/O (I) I/O basics. Concept of a clusters

COSC 6374 Parallel Computation. Parallel I/O (I) I/O basics. Concept of a clusters COSC 6374 Parallel I/O (I) I/O basics Fall 2012 Concept of a clusters Processor 1 local disks Compute node message passing network administrative network Memory Processor 2 Network card 1 Network card

More information

VDI Solutions - Advantages of Virtual Desktop Infrastructure

VDI Solutions - Advantages of Virtual Desktop Infrastructure VDI s Fatal Flaw V3 Solves the Latency Bottleneck A V3 Systems White Paper Table of Contents Executive Summary... 2 Section 1: Traditional VDI vs. V3 Systems VDI... 3 1a) Components of a Traditional VDI

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000)

The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) IntelliMagic, Inc. 558 Silicon Drive Ste 101 Southlake, Texas 76092 USA Tel: 214-432-7920

More information

OPTIMIZING VIRTUAL TAPE PERFORMANCE: IMPROVING EFFICIENCY WITH DISK STORAGE SYSTEMS

OPTIMIZING VIRTUAL TAPE PERFORMANCE: IMPROVING EFFICIENCY WITH DISK STORAGE SYSTEMS W H I T E P A P E R OPTIMIZING VIRTUAL TAPE PERFORMANCE: IMPROVING EFFICIENCY WITH DISK STORAGE SYSTEMS By: David J. Cuddihy Principal Engineer Embedded Software Group June, 2007 155 CrossPoint Parkway

More information

Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1

Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1 Performance Study Performance Characteristics of and RDM VMware ESX Server 3.0.1 VMware ESX Server offers three choices for managing disk access in a virtual machine VMware Virtual Machine File System

More information

WHITE PAPER FUJITSU PRIMERGY SERVER BASICS OF DISK I/O PERFORMANCE

WHITE PAPER FUJITSU PRIMERGY SERVER BASICS OF DISK I/O PERFORMANCE WHITE PAPER BASICS OF DISK I/O PERFORMANCE WHITE PAPER FUJITSU PRIMERGY SERVER BASICS OF DISK I/O PERFORMANCE This technical documentation is aimed at the persons responsible for the disk I/O performance

More information

Parallels Cloud Server 6.0

Parallels Cloud Server 6.0 Parallels Cloud Server 6.0 Parallels Cloud Storage I/O Benchmarking Guide September 05, 2014 Copyright 1999-2014 Parallels IP Holdings GmbH and its affiliates. All rights reserved. Parallels IP Holdings

More information

PARALLELS CLOUD SERVER

PARALLELS CLOUD SERVER PARALLELS CLOUD SERVER An Introduction to Operating System Virtualization and Parallels Cloud Server 1 Table of Contents Introduction... 3 Hardware Virtualization... 3 Operating System Virtualization...

More information

white paper Capacity and Scaling of Microsoft Terminal Server on the Unisys ES7000/600 Unisys Systems & Technology Modeling and Measurement

white paper Capacity and Scaling of Microsoft Terminal Server on the Unisys ES7000/600 Unisys Systems & Technology Modeling and Measurement white paper Capacity and Scaling of Microsoft Terminal Server on the Unisys ES7000/600 Unisys Systems & Technology Modeling and Measurement 2 This technical white paper has been written for IT professionals

More information

RAID. RAID 0 No redundancy ( AID?) Just stripe data over multiple disks But it does improve performance. Chapter 6 Storage and Other I/O Topics 29

RAID. RAID 0 No redundancy ( AID?) Just stripe data over multiple disks But it does improve performance. Chapter 6 Storage and Other I/O Topics 29 RAID Redundant Array of Inexpensive (Independent) Disks Use multiple smaller disks (c.f. one large disk) Parallelism improves performance Plus extra disk(s) for redundant data storage Provides fault tolerant

More information

Benchmarking Cassandra on Violin

Benchmarking Cassandra on Violin Technical White Paper Report Technical Report Benchmarking Cassandra on Violin Accelerating Cassandra Performance and Reducing Read Latency With Violin Memory Flash-based Storage Arrays Version 1.0 Abstract

More information

Configuring EMC CLARiiON for SAS Business Intelligence Platforms

Configuring EMC CLARiiON for SAS Business Intelligence Platforms Configuring EMC CLARiiON for SAS Business Intelligence Platforms Applied Technology Abstract Deploying SAS applications optimally with data stored on EMC CLARiiON systems requires a close collaboration

More information

Delivering Quality in Software Performance and Scalability Testing

Delivering Quality in Software Performance and Scalability Testing Delivering Quality in Software Performance and Scalability Testing Abstract Khun Ban, Robert Scott, Kingsum Chow, and Huijun Yan Software and Services Group, Intel Corporation {khun.ban, robert.l.scott,

More information

The Classical Architecture. Storage 1 / 36

The Classical Architecture. Storage 1 / 36 1 / 36 The Problem Application Data? Filesystem Logical Drive Physical Drive 2 / 36 Requirements There are different classes of requirements: Data Independence application is shielded from physical storage

More information

BridgeWays Management Pack for VMware ESX

BridgeWays Management Pack for VMware ESX Bridgeways White Paper: Management Pack for VMware ESX BridgeWays Management Pack for VMware ESX Ensuring smooth virtual operations while maximizing your ROI. Published: July 2009 For the latest information,

More information

The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage

The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage sponsored by Dan Sullivan Chapter 1: Advantages of Hybrid Storage... 1 Overview of Flash Deployment in Hybrid Storage Systems...

More information

Tableau Server Scalability Explained

Tableau Server Scalability Explained Tableau Server Scalability Explained Author: Neelesh Kamkolkar Tableau Software July 2013 p2 Executive Summary In March 2013, we ran scalability tests to understand the scalability of Tableau 8.0. We wanted

More information

PERFORMANCE TUNING ORACLE RAC ON LINUX

PERFORMANCE TUNING ORACLE RAC ON LINUX PERFORMANCE TUNING ORACLE RAC ON LINUX By: Edward Whalen Performance Tuning Corporation INTRODUCTION Performance tuning is an integral part of the maintenance and administration of the Oracle database

More information

InfiniteGraph: The Distributed Graph Database

InfiniteGraph: The Distributed Graph Database A Performance and Distributed Performance Benchmark of InfiniteGraph and a Leading Open Source Graph Database Using Synthetic Data Objectivity, Inc. 640 West California Ave. Suite 240 Sunnyvale, CA 94086

More information

Accelerating Server Storage Performance on Lenovo ThinkServer

Accelerating Server Storage Performance on Lenovo ThinkServer Accelerating Server Storage Performance on Lenovo ThinkServer Lenovo Enterprise Product Group April 214 Copyright Lenovo 214 LENOVO PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER

More information

SAP HANA - Main Memory Technology: A Challenge for Development of Business Applications. Jürgen Primsch, SAP AG July 2011

SAP HANA - Main Memory Technology: A Challenge for Development of Business Applications. Jürgen Primsch, SAP AG July 2011 SAP HANA - Main Memory Technology: A Challenge for Development of Business Applications Jürgen Primsch, SAP AG July 2011 Why In-Memory? Information at the Speed of Thought Imagine access to business data,

More information

WHITE PAPER Guide to 50% Faster VMs No Hardware Required

WHITE PAPER Guide to 50% Faster VMs No Hardware Required WHITE PAPER Guide to 50% Faster VMs No Hardware Required WP_v03_20140618 Visit us at Condusiv.com GUIDE TO 50% FASTER VMS NO HARDWARE REQUIRED 2 Executive Summary As much as everyone has bought into the

More information

Aspirus Enterprise Backup Assessment and Implementation of Avamar and NetWorker

Aspirus Enterprise Backup Assessment and Implementation of Avamar and NetWorker Aspirus Enterprise Backup Assessment and Implementation of Avamar and NetWorker Written by: Thomas Whalen Server and Storage Infrastructure Team Leader, Aspirus Information Technology Department Executive

More information

RAID Basics Training Guide

RAID Basics Training Guide RAID Basics Training Guide Discover a Higher Level of Performance RAID matters. Rely on Intel RAID. Table of Contents 1. What is RAID? 2. RAID Levels RAID 0 RAID 1 RAID 5 RAID 6 RAID 10 RAID 0+1 RAID 1E

More information

GiantLoop Testing and Certification (GTAC) Lab

GiantLoop Testing and Certification (GTAC) Lab GiantLoop Testing and Certification (GTAC) Lab Benchmark Test Results: VERITAS Foundation Suite and VERITAS Database Edition Prepared For: November 2002 Project Lead: Mike Schwarm, Director, GiantLoop

More information

SAS Application Performance Monitoring for UNIX

SAS Application Performance Monitoring for UNIX Abstract SAS Application Performance Monitoring for UNIX John Hall, Hewlett Packard In many SAS application environments, a strategy for measuring and monitoring system performance is key to maintaining

More information

Binary search tree with SIMD bandwidth optimization using SSE

Binary search tree with SIMD bandwidth optimization using SSE Binary search tree with SIMD bandwidth optimization using SSE Bowen Zhang, Xinwei Li 1.ABSTRACT In-memory tree structured index search is a fundamental database operation. Modern processors provide tremendous

More information

Initial Hardware Estimation Guidelines. AgilePoint BPMS v5.0 SP1

Initial Hardware Estimation Guidelines. AgilePoint BPMS v5.0 SP1 Initial Hardware Estimation Guidelines Document Revision r5.2.3 November 2011 Contents 2 Contents Preface...3 Disclaimer of Warranty...3 Copyright...3 Trademarks...3 Government Rights Legend...3 Virus-free

More information

Read this before starting!

Read this before starting! Points missed: Student's Name: Total score: /100 points East Tennessee State University Department of Computer and Information Sciences CSCI 4717 Computer Architecture TEST 2 for Fall Semester, 2006 Section

More information

OpenMosix Presented by Dr. Moshe Bar and MAASK [01]

OpenMosix Presented by Dr. Moshe Bar and MAASK [01] OpenMosix Presented by Dr. Moshe Bar and MAASK [01] openmosix is a kernel extension for single-system image clustering. openmosix [24] is a tool for a Unix-like kernel, such as Linux, consisting of adaptive

More information

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com Parallels Cloud Storage White Paper Performance Benchmark Results www.parallels.com Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...

More information

Chapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections

Chapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections Chapter 6 Storage and Other I/O Topics 6.1 Introduction I/O devices can be characterized by Behavior: input, output, storage Partner: human or machine Data rate: bytes/sec, transfers/sec I/O bus connections

More information

WITH A FUSION POWERED SQL SERVER 2014 IN-MEMORY OLTP DATABASE

WITH A FUSION POWERED SQL SERVER 2014 IN-MEMORY OLTP DATABASE WITH A FUSION POWERED SQL SERVER 2014 IN-MEMORY OLTP DATABASE 1 W W W. F U S I ON I O.COM Table of Contents Table of Contents... 2 Executive Summary... 3 Introduction: In-Memory Meets iomemory... 4 What

More information