WHITE PAPER

The Top 7 Things You Must Know About vSphere Storage Performance

BY ERIC SIEBERT

Virtualization is a game of resources where a hypervisor has to perform a delicate balancing act to ensure that virtual machines get the resources they need. On the surface that might sound like a pretty straightforward job, but it's not: virtualization creates a hostile environment where virtual machines are all simultaneously demanding enough resources to satisfy the needs of hungry application workloads. No resource is more challenged in a virtual environment than storage. The key to success with virtualization is having storage that will not become an anchor that slows down your whole environment. Storage is both a very wide and very deep topic in vSphere, and there is a lot you need to know to ensure that your storage is capable of taming the I/O blender effect that is typical in virtualized environments. With that in mind, this paper covers seven key things that you must know to gain a better understanding of vSphere storage performance.

Must Know #1: Virtual machines require four resources to function: CPU, memory, network and disk. Storage is the one resource that is remote and shared by many hosts, so it is often the source of negative performance.

The impact that storage has on the vSphere environment as a whole

In a virtual environment, shared storage often becomes the weakest link in the resource chain because it is the only resource that is not local to a host. It's a shared resource, and it's also the slowest resource. To put it more bluntly: storage in virtualization is like having a relay team full of jackrabbits, except for one member who is a tortoise that slows down the whole team. We all know how the famous fable of the tortoise and the hare ended, with slow and steady winning the race, but when it comes to storage in a virtualized environment, fast and furious is what wins the game.
Virtual machines require four resources to function:

- CPU
- Memory
- Network
- Disk

A delay in delivering any one of these resource types will have a negative performance impact and will also limit your ability to grow your environment. The challenge with storage is that while CPU, memory and network resources are all non-shared resources that are local to a host, storage is the one resource that is remote and is shared by many hosts, as illustrated in the figure on the following page. So while you may have plenty of free host resources available, if your storage is constrained, you cannot take advantage of those available host resources, and overall performance and VM density will suffer.

If you look at how a typical virtual environment is laid out, you have many hosts that all share a single storage array. That storage array becomes a focal point: while the virtual machines run on a host, they reside on the shared storage array. Every host depends on that single shared storage array to be able to run its virtual machines. That shared storage array becomes both a single point of failure and a single point of success. If it struggles and becomes overwhelmed by the workloads from all your hosts, it can be deemed a failure. If it becomes the rock star in your environment and doesn't hold your hosts back, it can be deemed a success. The key point to remember is that the relationship between your hosts and the storage array is mostly a one-way relationship, with your hosts being the ones that are dependent on the storage array and not the other way around. If a single host is struggling to keep up with the workloads from the VMs running on it, only that host is impacted. However, if your storage array struggles, every host in your virtual environment is going to feel it. With the storage array being a shared, non-local, and slow resource, how well it performs is critical to how well your applications will perform. The resource that is the slowest will dictate the overall performance of a virtual machine.

Must Know #2: A storage protocol is the way storage I/O is transported between a server and a storage array. The two main types of protocol are block and file. Block protocols (iSCSI, Fibre Channel) package disks into LUNs that are presented to a host as volumes. File protocols (the most common is NFS) provide file-based data storage services and communicate through a client on the host.

Understanding the differences between storage protocols

At a basic level, the role of a storage protocol in the storage stack is simply to transport storage I/O from a server/host to a shared storage array.
You can think of a storage protocol as a vehicle that is loaded up and then driven from a host across a highway to the destination storage array, as illustrated in the figure on the next page. This journey typically begins at the HBA or NIC in a host and continues across a network or fabric, traveling through switches before it ends at the controller in a storage array. Different storage protocols like iSCSI, NFS, Fibre Channel and FCoE are simply different types of vehicles that take that journey and drive your storage I/O back and forth. There are differences in how each protocol operates, but each protocol is simply a different means to the same end result. Storage protocols are split into two categories, file and block. Let's take a look at the differences.

At a basic level, block protocols are essentially like having a remote hard disk that is accessed via a special protocol such as Fibre Channel or iSCSI. Block storage packages multiple disks into LUNs that are assigned LUN numbers and are then presented as raw, unformatted storage volumes to a host. With block storage, the host formats the LUNs with a disk operating system (i.e., VMFS) and directly manages all file activities that occur, such as reads, writes, and file locking. Block protocols also allow SCSI commands to be sent directly from the host to the storage device for any disk operation. While iSCSI and Fibre Channel are both block protocols, they distinctly differ in their implementation, with Fibre Channel using a proprietary transport method that requires specialized cables, HBAs and switches that communicate over a fabric. iSCSI, on the other hand, uses TCP/IP as its transport method with standard Ethernet networking components. iSCSI tends to be easier to implement and is more budget friendly, but Fibre Channel can provide the best possible performance and scalability.

The most common file protocol used is NFS, which is similar to iSCSI in that it too uses TCP/IP and standard Ethernet networking components. But the similarities end there, as NFS provides both storage and a file system that is designed to provide file-based data storage services. NFS-based storage arrays are commonly referred to as NAS devices. A host cannot send SCSI commands directly to a NAS device and requires a special NFS software client running on the host to communicate with it.
Disk is provisioned to hosts via shares that map to folders on the NAS device, and data is written and read as variable-length files. Because of this, most NAS devices are thin provisioned by default, as space isn't allocated up front and files only contain data that has been written to disk. NFS storage is typically the easiest protocol to implement and is just as budget friendly as iSCSI. Another factor to consider with storage protocols is bandwidth, which is typically 1Gbps or 10Gbps for Ethernet-based protocols like iSCSI and NFS, and 4Gbps, 8Gbps or 16Gbps for Fibre Channel. When it comes to bandwidth you may think that a greater Gbps number is faster, but that's a common misconception. Gbps is not actually the speed that data travels at. No matter what the protocol or bandwidth, data travels at the same speed, which is nearly the speed of light. The larger Gbps number is about the amount of data (throughput) that can travel over the protocol used, not the speed that it travels at. To illustrate this, think of a highway between the host and storage array: the speed limit will always be the same on the highway, but when we add more lanes to the highway our bandwidth is increased and more cars can travel at the speed limit without encountering congestion.
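The highway analogy can be made concrete with a little arithmetic. The sketch below shows how link bandwidth governs how much data moves per second, not how fast each bit travels; the 90% efficiency factor and the payload sizes are illustrative assumptions, not vendor specifications.

```python
# Sketch: bandwidth is throughput (lanes), not speed (the speed limit).
# The efficiency factor is an assumed allowance for protocol overhead.

def transfer_seconds(payload_mb, link_gbps, efficiency=0.9):
    """Time to move a payload over a link, ignoring propagation delay."""
    usable_mb_per_s = link_gbps * 1000 / 8 * efficiency  # Gbps -> MB/s
    return payload_mb / usable_mb_per_s

for gbps in (1, 8, 10, 16):
    t = transfer_seconds(1024, gbps)  # moving 1 GB of storage I/O
    print(f"{gbps:>2} Gbps link: ~{t:.1f} s for 1 GB")
```

Every bit still arrives at the same "speed limit"; the wider link simply carries more of them at once, which is why a 10Gbps link drains the same payload roughly ten times sooner than a 1Gbps link.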

Choosing a storage protocol for a virtual environment is often based on costs, personal comfort levels, past experience, and existing infrastructure. All protocols have different characteristics, strengths, and weaknesses, but they all work equally well with vSphere. Just remember that no matter what storage protocol you choose, your data is going to make the trip at the same speed; what is most important is to ensure you have sufficient bandwidth to meet your VM workload requirements.

Must Know #3: Storage architecture is the design, components, and configuration of your storage array from the point where data enters the array to the point where data is read or written within the array. This includes physical components like storage interfaces and controllers, cache, CPU and memory, and drive type and speed, as well as configurations such as RAID levels, QoS settings, auto-tiering and more.

Why storage architecture matters more than storage protocol

When implementing a storage infrastructure to support a virtual environment, the biggest decisions to make are:

- Which storage protocol to implement
- Which storage array to use
- How to architect and configure that storage array

These are all important and impactful decisions, but none is more important than having the proper storage architecture that can keep up with your virtual machine workloads. Storage architecture in the context of this paper is defined as the design, components, and configuration of your storage array. This covers everything from the point where data enters the array to the point where data is read or written within the array, and everything in between. It includes physical components like storage interfaces and controllers, cache, CPU and memory, and drive type and speed, as well as configurations such as RAID levels, QoS settings, auto-tiering and more. So while it is important that you have sufficient bandwidth between your hosts and storage array, this typically is not where bottlenecks form.
When you have many hosts all reading and writing I/O to a storage array, the slowdown usually occurs within the storage array itself. The first point where data can slow down is at the storage array interface: you may have 12 hosts that each have multiple 1Gbps or 10Gbps interfaces, but your storage array has many fewer interfaces to handle the incoming data. Think of a 24-lane highway merging down to 4 lanes; the busier that highway is, the more your data will slow down when it gets to that merge point. From there the data has to go through I/O controllers so it can get to its final destination, which is the drives in the array. Here you have another potential choke point, as the drives are the slowest part of the whole journey. Traditional hard drives are mechanical, and drive heads must be positioned in the right spots above spinning platters to read and write to specific locations on the drive; the average time for this to complete is 9-12ms. While SSDs are much faster (<1ms), as they can directly access any disk location, they are still not as fast as other host resources such as RAM, which can be accessed in nanoseconds.

The number, type, and speed of the drives that you use will be the biggest influence on how much performance your storage array is capable of. Think of drives as the engine in a car: using different drive types and having more drives is like increasing the horsepower of your engine. A traditional drive is capable of a fixed number of IOPS based on its mechanical characteristics, such as rotational speed and seek times. That per-drive IOPS capability ranges from around 75-100 IOPS for 7,200 rpm drives up to around 175-210 IOPS for 15,000 rpm drives. SSDs, which are like adding a supercharger to your engine, are capable of much higher IOPS, varying from 5,000 to 75,000 IOPS based on factors such as their interface, controller, memory type, and cache. The sum of each drive's IOPS capability determines the total IOPS that your storage array can support. If you had a storage array with 24 15,000 rpm SAS drives, it would be capable of supporting 4,200 to 5,040 IOPS.

Another storage architecture aspect that influences performance is RAID level, which causes one or more additional writes to occur, as data must be written across multiple drives to protect against data loss from a drive failure. For RAID level 1 (mirroring) the write penalty is 2, as you have to write data to both drives that are mirrored. For RAID level 5 that increases to 4, as each logical write turns into four backend operations: reading the old data and parity, then writing the new data and parity. With RAID 6 it goes even higher, to 6. In addition, if your storage array does synchronous replication with another storage array, that could potentially slow it down as well, because the other array must acknowledge that all writes occurred. The net result is that the more you protect data, the longer your disk writes will take, as data has to be written in multiple locations before a write is acknowledged as complete. All these factors, combined with numerous other design and configuration decisions, dictate how well your storage architecture is able to keep up with the I/O demands of your virtualized workloads. A storage array can easily become overwhelmed during periods of peak usage as bottlenecks form, and all your VM workloads slow down as a result. Having the proper storage architecture is crucial to mitigate this and to prevent your storage array from holding your virtual environment back and becoming the weakest link.
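The back-of-envelope math above can be sketched in a few lines. The RAID write penalties (RAID 1 = 2, RAID 5 = 4, RAID 6 = 6) and the 24-drive example follow the text; the 30% write mix in the usage example is an assumed workload, not a figure from the paper.

```python
# Sketch: raw vs. usable IOPS once RAID write penalties are applied.
# Per-drive IOPS and the write mix are rough rules of thumb.

RAID_WRITE_PENALTY = {1: 2, 5: 4, 6: 6}

def raw_iops(drive_count, iops_per_drive):
    """Total IOPS the spindles can deliver before RAID overhead."""
    return drive_count * iops_per_drive

def functional_iops(drive_count, iops_per_drive, raid_level, write_pct):
    """Usable IOPS once each logical write costs multiple backend I/Os."""
    raw = raw_iops(drive_count, iops_per_drive)
    penalty = RAID_WRITE_PENALTY[raid_level]
    read_pct = 1.0 - write_pct
    return raw / (read_pct + write_pct * penalty)

# 24 x 15,000 rpm SAS drives at ~175-210 IOPS each, as in the text:
print(raw_iops(24, 175), "-", raw_iops(24, 210), "raw IOPS")  # 4200 - 5040
# The same array in RAID 5 with an assumed 30% write workload:
print(round(functional_iops(24, 175, 5, 0.30)), "usable IOPS")
```

Notice how the write penalty roughly halves the usable IOPS of the RAID 5 example: this is the "more protection means slower writes" trade-off the text describes.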
That's why it's important that you understand the many factors that influence the ability of your storage array to handle your VM workloads, so you can make the proper storage architecture decisions when designing storage for your virtual environment. It will take the right combination of storage architecture choices to ensure that your storage serves as a strong foundation for your virtual environment.

Must Know #4: In addition to protocol and architecture, storage performance is also affected by array features and configuration, workload types, multi-pathing, vSphere settings, and caching or buffering technologies.

Understanding all the factors that influence overall storage performance

Your storage array architecture and protocol are big factors that influence your overall storage performance, but there are additional factors that also contribute, including:

- Certain software features within your array
- Your array configuration
- Workload types
- Multi-pathing
- vSphere settings
- I/O caching or buffering technologies

Some array software features can help improve performance while others can degrade it. Features like auto-tiering help improve performance by ensuring that frequently accessed data resides on the fastest tiers of storage. Other array features like deduplication, replication, and encryption carry additional overhead that can slow the array down, as you trade raw performance for more functionality.

Using certain vSphere features can also impact performance, in a good or bad way. vSphere features like Storage Profiles, Storage DRS, and Storage I/O Control can help balance and more tightly control storage resources. Other vSphere features and operations like thin provisioning, VM snapshots, Storage vMotion, VM cloning, and more can impact performance by stealing resources away from VM workloads. So it is important to be aware of both the positive and negative effects that these can have on your overall storage performance. The type of workload generated by the applications in your VMs has a big impact on storage performance. If you have mostly predictable and steady workloads without any huge spikes, your storage array will be able to accommodate them more easily. If you have a lot of unpredictable and wild I/O patterns, with frequent peaks or I/O storms, your storage array will be challenged to keep up with them and at some point will become overwhelmed. Your read-to-write ratios, and whether the reads and writes are random or sequential, also have a big impact on performance. A storage array can handle reads much faster than writes, and sequential data can be read and written much faster than random data. As a result you should understand your workload characteristics and design your storage architecture with them in mind. One configuration decision that requires some careful planning is sizing LUNs for the placement of virtual machines. When you create a LUN you are grouping a number of drives together to present as a single volume to a host. Creating multiple LUNs essentially creates pools of capacity and performance (IOPS) that cannot be shared between LUNs. Having one LUN whose IOPS capacity is being maxed out by VMs and another LUN that is only at half capacity is inefficient and creates additional bottlenecks.
You must plan to ensure that LUNs are not sized too small to meet the workload needs of the VMs residing on them, and that they are also balanced so you do not have a lopsided performance-to-capacity ratio. Finally, caching or buffering is another storage architecture decision that can have a big impact on performance. There are many different ways this can be implemented, using SSDs, RAM, or PCIe devices, and caching or buffering can be implemented on the host side, on the storage side, or even inline. Caching helps deliver I/O more quickly by not having to go all the way to the disk to get it. Buffering helps speed up write operations by acknowledging the write faster, so the host doesn't have to wait for the operation to complete on the storage array.

Must Know #5: Latency is the time one component sits idle waiting for another component to complete a task. There are many sources of latency as I/O journeys from VM to disk. Storage latency translates into slow performance.

How latency is the silent killer of storage performance

Latency in general is defined as the amount of time that one component in a system sits idle as it waits for another component to complete a task. Latency as it relates to storage translates into slow performance: the higher the latency, the slower the performance. Latency is measured in different units of time based on how fast a component is:

- For CPU and memory components, latency is measured in nanoseconds (billionths of a second)
- For storage, which is a slower resource, latency is measured in milliseconds (thousandths of a second)

A millisecond isn't really noticeable by humans, who are used to dealing with larger units of time such as seconds and minutes, but to a computer a millisecond is a long period of time. Storage I/O has a very long journey to make that starts at a VM and ends at its final destination on a disk. Storage I/O starts inside the VM, travels through the virtual SCSI controller, and into the hypervisor, where it goes through modules and stands in line in a queue with all the I/O from other VMs. From there it exits the host out a NIC or HBA, down a cable, through a switch, down another cable into an array controller, through a CPU, through a cache to a drive head, and finally to a disk platter or cell. That's no simple journey, as shown in the figure below, and one that could encounter any number of problems or slowdowns. To put that journey into a real-life perspective, it's like flying from one state to another: taking a taxi to the airport, waiting in security lines, taking a train to the gate, boarding a plane, departing and landing, then another train and taxi until you get to your destination. Now that you have a glimpse into what a storage I/O has to go through to complete its journey, let's take a look at what can cause that journey to slow down. There is no such thing as good latency; however, some latency is normal and cannot be avoided. As latency increases, performance decreases, and available storage resources will dictate overall VM performance. Latency is like accelerating in your car while pressing down on the brake pedal: you still move forward, but at a slower rate. Latency isn't always easily noticeable. Low to medium latency is probably not noticed by your users. However, when latency starts to creep higher, your VMs become very sluggish and it will become readily apparent. As a result it is best to always monitor latency and investigate when it starts to rise, with the goal of keeping latency as low as possible.
There are many potential causes of latency. The simple distance that I/O has to travel to get from a host to a storage array introduces some inherent latency, as the farther anything travels the longer it takes to get there. But most latency occurs inside the storage array; this is known as device latency. Within the storage array, the single biggest contributor to latency is typically the disks. Traditional spinning disks are going to have latency no matter what: it takes time for platters to spin (rotational latency) and for heads to be positioned over the proper spot (seek time) to read and write data. With SSDs that is greatly reduced, but you still have some latency.
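The mechanical latencies above follow directly from drive geometry. This rough model, under the standard assumption that average rotational latency is half a platter revolution, shows where the 9-12ms figure for spinning disks comes from; the seek times used are typical published values, not measurements.

```python
# Sketch: spinning-disk service time = half a revolution (rotational
# latency) + average seek time. Seek figures are assumed typical values.

def avg_rotational_latency_ms(rpm):
    """Half a platter revolution, in milliseconds."""
    return 0.5 * 60_000 / rpm

def service_time_ms(rpm, avg_seek_ms):
    return avg_rotational_latency_ms(rpm) + avg_seek_ms

print(f"7,200 rpm:  ~{service_time_ms(7200, 8.5):.1f} ms per random I/O")
print(f"15,000 rpm: ~{service_time_ms(15000, 3.5):.1f} ms per random I/O")
```

An SSD skips both terms entirely, which is why its access time drops below a millisecond even though it still trails RAM by several orders of magnitude.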

Some additional causes of latency include:

- RAID levels
- Host (VMkernel) queue full
- High host CPU usage
- Bandwidth congestion
- LUNs with not enough disks
- Array sized too small
- Synchronous replication
- Too few controllers
- Too much random I/O
- Backup traffic
- Host I/O offloading (VAAI)
- VM operations (vMotion, snapshots, etc.)

In vSphere, Total Guest Latency (GAVG) is the end-to-end latency measurement from the point where I/O enters the VMkernel from a VM to the point where it arrives at the storage device. Total Guest Latency is the sum of Device Latency (DAVG) and Kernel Latency (KAVG). Device Latency is the time I/O takes to leave the host NIC or HBA, get to the storage array, and return. Kernel Latency is the time that I/O spends within the VMkernel (the ESXi hypervisor). Queue Latency (QAVG) is a sub-statistic of Kernel Latency that measures the amount of time I/O spends queued in the HBA driver. These statistics are illustrated in the figure below and can be monitored using the vCenter Server performance tabs or the command-line utility esxtop, which provides real-time continuous monitoring and is great for troubleshooting performance issues.

So how much latency is bad for your vSphere environment? Having some latency cannot be avoided, and you'll never have zero Total Guest Latency. How much latency you have will vary based on many factors, including storage protocol and architecture:

- In general, Total Guest Latency should be under 20ms; most of that latency will consist of Device Latency. Once you get above 20ms and climb toward 50ms, you will see significant VM slowdowns.
- Your Queue and Kernel Latency should be as close to zero as possible. I/O should be zipping through your hypervisor; if it's spending longer than 1ms in the VMkernel, that's considered high.

High Kernel or Queue Latency can be the result of a host's queue depth being set too small so that it keeps filling up, or of the host simply being very busy. High Device Latency indicates a problem with the storage array: either it's simply too busy to keep up or it's improperly architected. Preventing and fixing latency caused by the storage array is fairly simple: add more spindles or faster drives, or get a bigger array to increase the IOPS capacity. That may not always be an option, though, due to lack of space, budgets, or other reasons. Fortunately there are alternative methods that can help solve latency problems. I/O acceleration technologies that reside within a server, inline to a storage array, or within a storage array can use caching and buffering to reduce the stress on the storage array. Whatever you choose to do, just remember that latency is inevitable, but by taking the right steps you can ensure that its impact is minimal. Knowing your VM workloads and understanding your storage array's limitations is the key to taming latency.

Must Know #6: There is no way you can accurately size a storage array unless you understand your VM workloads. In addition to IOPS and latency, you must also consider throughput. vCenter Server, vCenter Operations Manager, and esxtop are all tools that can help you monitor your VM workloads.
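The thresholds above lend themselves to a simple triage rule. This sketch applies the cut-offs from the text (GAVG under ~20ms healthy, 20-50ms a significant slowdown, KAVG over ~1ms pointing at the host side); the function name and message strings are illustrative, not part of any VMware tool.

```python
# Sketch: classifying esxtop-style latency readings using the
# thresholds discussed in the text. GAVG = DAVG + KAVG.

def triage_latency(davg_ms, kavg_ms):
    gavg_ms = davg_ms + kavg_ms
    findings = []
    if kavg_ms > 1.0:
        findings.append("high kernel latency: check queue depth / host load")
    if gavg_ms >= 50:
        findings.append("severe guest latency")
    elif gavg_ms >= 20:
        findings.append("elevated guest latency: VMs will feel sluggish")
    if davg_ms >= 20:
        findings.append("high device latency: array too busy or under-architected")
    return findings or ["latency within normal range"]

print(triage_latency(davg_ms=5.0, kavg_ms=0.2))   # healthy host
print(triage_latency(davg_ms=48.0, kavg_ms=2.5))  # array and host issues
```

In practice you would feed such a check with DAVG/KAVG values collected from esxtop or vCenter rather than hard-coded numbers.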
Understanding VM workloads and how to measure and monitor them

Perhaps the most important thing you should know about vSphere storage performance is how to know and understand your VM workloads. There is no way you can size a storage array accurately unless you crunch the numbers to understand what your requirements are. This is one area where you shouldn't make assumptions or guess, or you risk wasting time, resources, and a lot of money trying to fix it afterward. You need to understand your VM workload numbers, characteristics and trends so you can implement a storage architecture that can handle them without faltering. This also includes any physical servers that you plan on virtualizing. There are a number of tools from both VMware and third-party vendors that will help you measure virtual and physical workloads so you can use that data when architecting and sizing a storage array to support your virtual environment. The two most important measurements when it comes to storage performance are IOPS and latency, which we covered in detail earlier. IOPS is important as it's the measuring stick for how big your VM workloads are and how many resources they will consume on your storage array. Latency does not directly apply to sizing VM workloads, but it's an important measurement because it tells you how long your VMs are waiting to read and write data to and from the storage array. One additional measurement that provides a different perspective on I/O is throughput, which is typically measured in MB/s. Where IOPS can tell you how tall your VM workloads are, throughput can tell you how wide they are, so you can see the complete picture and get a better understanding of where potential bottlenecks may be forming.
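The "tall vs. wide" relationship is just multiplication: throughput in MB/s is IOPS times the average I/O size. The sketch below makes that concrete; the workload sizes in the examples are illustrative assumptions.

```python
# Sketch: the same IOPS figure can mean very different throughput
# depending on average I/O size, so both must be measured.

def throughput_mb_s(iops, avg_io_kb):
    """Convert an IOPS rate and average I/O size into MB/s."""
    return iops * avg_io_kb / 1024

# 5,000 IOPS of small (8KB) OLTP-style I/O vs. large (256KB) sequential I/O:
print(throughput_mb_s(5000, 8))    # ~39 MB/s: tall but narrow
print(throughput_mb_s(5000, 256))  # 1250 MB/s: the same height, far wider
```

This is why sizing on IOPS alone can mislead: a backup job pushing large sequential blocks can saturate array bandwidth at a modest IOPS figure that would look harmless on its own.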

There are a number of tools that you can use to monitor storage performance in your vSphere environment. vCenter Server provides some good basic reporting of storage metrics that can get you started. By selecting different objects (such as a host, VM, datastore, or cluster) in vCenter you can view aggregate statistics, or statistics that are unique to that object. You can also change the statistics collection level from the default level 1 (which shows minimal statistics) all the way up to level 4 to see many more statistics; this can be especially useful for troubleshooting. For even better performance monitoring, VMware's vCenter Operations Manager can show much more detailed storage performance information. vCenter Operations Manager uses vendor-specific storage array plug-ins that provide more in-depth information through custom dashboards developed by storage array vendors. There are also many great third-party storage monitoring tools you can use to ensure you have a good monitoring solution in place for your critical storage resources. And don't forget about esxtop: while it is a command-line tool, it's invaluable for getting a good real-time view of key storage performance metrics. Monitoring storage is a full-time job, but that doesn't mean you personally have to sit there and do it 24x7. You should have a monitoring solution in place that can do it for you. You need to constantly monitor storage to capture historical data and trends that can be used for both troubleshooting and capacity planning. It's important to know what is considered normal storage performance so you can more easily spot abnormal performance that may need to be investigated. Constantly monitoring performance is also important so you can correlate performance patterns to specific changes or events that occur in your environment.
Understanding your VM workloads and keeping a close eye on storage performance enables you to make more informed decisions and gives you deeper insight into whether your storage is healthy or not.

Must Know #7: I/O acceleration technologies help improve storage performance, allow storage to handle peak I/O periods, and reduce the number of I/O requests to the array.

Using storage I/O acceleration technologies to help improve performance

Storage I/O acceleration technologies play a strategic role in a storage architecture by enabling storage I/O to be delivered more quickly and more efficiently than a storage array can deliver on its own. The benefits of this are twofold:

- It allows you to survive peak I/O periods with minimal performance impact
- It reduces the number of I/O requests to the storage array

The challenge with sizing a storage array for virtualization is being able to handle periods when peak I/O may occur and slow down your entire virtual environment. Your storage array may be able to handle your workloads just fine 95% of the time, but what are you supposed to do when those infrequent peak periods hit? If you oversize your storage array to handle those peaks, it could be both costly and wasteful, as the rest of the time you wouldn't be using your storage array very efficiently.

Storage I/O accelerators can help by bringing data closer to the host and by using faster storage media such as flash memory and SSDs to serve up data more quickly. Storage I/O accelerators can be implemented in different locations in the storage stack, such as:

- On the server side, using PCIe boards, SSDs or RAM
- On the storage side, using caching, SSDs or auto-tiering
- As an inline solution between the server and the storage array

These solutions typically provide read caching so that data can be read much more quickly. Some also provide write buffering, which allows writes to be acknowledged faster without waiting for them to be written and acknowledged by the storage array. No matter what method is used, the goal is the same: to improve storage performance. The additional benefit that storage I/O accelerators provide is offloading I/O reads that the storage array would normally have to serve. Think of this as like having an assistant to help you do part of your work at the office, freeing you up to handle additional work. This has the same net effect on a storage array: by letting an I/O accelerator handle some of your VM workloads you are freeing up your storage array, which provides you with more usable IOPS and helps to reduce latency. Server-side storage I/O accelerators also have a distinct advantage, as the data is much closer to the host and can be accessed much faster than having to go all the way out to the storage array for it. Infinio Accelerator is a server-side I/O acceleration solution. It creates a read cache on the host that works as an offload engine by becoming a storage proxy that reduces the I/O traffic load on your shared storage array. Infinio Accelerator has an advantage over more traditional storage I/O acceleration solutions: it's a software-only solution that leverages existing host RAM for storage I/O caching, so no additional hardware is needed.
Using RAM for caching has an added benefit: data can be read more quickly because RAM can be accessed in nanoseconds, compared to the milliseconds that are typical of SSDs and traditional hard disks. Another big advantage of this solution is that it can be deployed quickly and easily without any disruption to your virtual environment. Infinio Accelerator is deployed as a virtual appliance on each host and is transparent to both the host and the shared storage device. Each host contributes a small amount of RAM to the cache pool to create a larger logical cache that is shared across all accelerated hosts. By deduplicating the contents of the shared cache, the cache size is effectively made even larger. Making storage architecture decisions and figuring out exactly how much hardware you need to fix your storage performance problems is always challenging. Infinio Accelerator eliminates the challenge of figuring out how much physical hardware you need to solve your storage performance problem by not requiring any hardware at all. With Infinio Accelerator you can quickly and easily boost your storage performance and avoid unnecessary expense and wasted time. The impact of a storage I/O acceleration solution like Infinio Accelerator on storage performance can be dramatic. With Infinio Accelerator you can get immediate results because:

- It frees up IOPS capacity on your shared storage array through cache offload, which can also reduce latency
- It improves performance, as read requests served from local host memory (cache) are much quicker than requests to the remote storage array

The net result is better overall storage performance for your virtual environment and less worrying about whether your storage array is capable of keeping up with your hosts.
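The effect of a host-side read cache on average latency can be sketched as a simple weighted blend of hits and misses. The hit/miss latencies below are illustrative assumptions for RAM and a busy array, not Infinio measurements.

```python
# Toy model: average read latency as a blend of cache hits served from
# host RAM and misses that make the full trip to the storage array.
# All latency figures are assumed for illustration.

def effective_read_latency_ms(hit_rate, ram_ms=0.001, array_ms=5.0):
    """Blend cache hits (RAM speed) with misses (full trip to the array)."""
    return hit_rate * ram_ms + (1.0 - hit_rate) * array_ms

for hit in (0.0, 0.5, 0.9):
    print(f"{hit:.0%} hit rate -> {effective_read_latency_ms(hit):.3f} ms")
```

Because RAM hits are effectively free on this timescale, average read latency falls almost linearly with hit rate, and every hit is also one fewer request the shared array has to serve.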

Summary

Being successful with virtualization means also being successful with storage. The storage that supports your virtual environment must be able to provide consistently good performance with low latency, and it must not falter when peaks occur. This is no easy task, and there are many pitfalls and challenges that can keep it from happening. We have covered a range of topics related to vSphere storage performance that will help you better understand the challenges you face. Being as informed as possible and making smart decisions about your storage architecture will go a long way toward keeping you on the path to success, so you can enjoy all the benefits of virtualization without the storage-related headaches that usually come with it.

About the author

Eric Siebert is an IT industry veteran, speaker, author and blogger with more than 25 years of experience and a long-standing focus on virtualization. Siebert has published several books, including his most recent, Maximum vSphere from Pearson Publishing, along with hundreds of articles and white papers for TechTarget and VMware partners. He also runs and maintains his own VMware information website, vSphere-land. Siebert is a frequent speaker at industry conferences and events, including VMworld, and has been recognized as a vExpert by VMware every year since the program's inception.


More information

THE HYPER-CONVERGENCE EFFECT: DO VIRTUALIZATION MANAGEMENT REQUIREMENTS CHANGE? by Eric Siebert, Author and vexpert

THE HYPER-CONVERGENCE EFFECT: DO VIRTUALIZATION MANAGEMENT REQUIREMENTS CHANGE? by Eric Siebert, Author and vexpert THE HYPER-CONVERGENCE EFFECT: DO VIRTUALIZATION MANAGEMENT REQUIREMENTS CHANGE? by Eric Siebert, Author and vexpert THE HYPER-CONVERGENCE EFFECT: DO VIRTUALIZATION MANAGEMENT REQUIREMENTS CHANGE? There

More information

Configuration Maximums VMware vsphere 4.0

Configuration Maximums VMware vsphere 4.0 Topic Configuration s VMware vsphere 4.0 When you select and configure your virtual and physical equipment, you must stay at or below the maximums supported by vsphere 4.0. The limits presented in the

More information

HGST Virident Solutions 2.0

HGST Virident Solutions 2.0 Brochure HGST Virident Solutions 2.0 Software Modules HGST Virident Share: Shared access from multiple servers HGST Virident HA: Synchronous replication between servers HGST Virident ClusterCache: Clustered

More information

How To Get A Storage And Data Protection Solution For Virtualization

How To Get A Storage And Data Protection Solution For Virtualization Smart Storage and Modern Data Protection Built for Virtualization Dot Hill Storage Arrays and Veeam Backup & Replication Software offer the winning combination. Veeam and Dot Hill Solutions Introduction

More information

The Power of Deduplication-Enabled Per-VM Data Protection SimpliVity s OmniCube Aligns VM and Data Management

The Power of Deduplication-Enabled Per-VM Data Protection SimpliVity s OmniCube Aligns VM and Data Management The Power of Deduplication-Enabled Per-VM Data Protection SimpliVity s OmniCube Aligns VM and Data Management Prepared for SimpliVity Contents The Bottom Line 1 Introduction 2 Per LUN Problems 2 Can t

More information

Meeting the Five Key Needs of Next-Generation Cloud Computing Networks with 10 GbE

Meeting the Five Key Needs of Next-Generation Cloud Computing Networks with 10 GbE White Paper Meeting the Five Key Needs of Next-Generation Cloud Computing Networks Cloud computing promises to bring scalable processing capacity to a wide range of applications in a cost-effective manner.

More information

Evaluation Report: Supporting Multiple Workloads with the Lenovo S3200 Storage Array

Evaluation Report: Supporting Multiple Workloads with the Lenovo S3200 Storage Array Evaluation Report: Supporting Multiple Workloads with the Lenovo S3200 Storage Array Evaluation report prepared under contract with Lenovo Executive Summary Virtualization is a key strategy to reduce the

More information

EMC Celerra Unified Storage Platforms

EMC Celerra Unified Storage Platforms EMC Solutions for Microsoft SQL Server EMC Celerra Unified Storage Platforms EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2008, 2009 EMC

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service Update 1 ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until

More information

Pivot3 Reference Architecture for VMware View Version 1.03

Pivot3 Reference Architecture for VMware View Version 1.03 Pivot3 Reference Architecture for VMware View Version 1.03 January 2012 Table of Contents Test and Document History... 2 Test Goals... 3 Reference Architecture Design... 4 Design Overview... 4 The Pivot3

More information