DEPLOYMENT BEST PRACTICE FOR MICROSOFT SQL SERVER WITH VMAX 3 SLO MANAGEMENT

EMC VMAX Engineering White Paper

ABSTRACT
With the introduction of the third-generation VMAX disk arrays, Microsoft SQL Server administrators have a new way to deploy a wide range of applications in a single high-performance, high-capacity, self-tuning storage environment that can dynamically manage each application's performance requirements with minimal effort.

January 2015

To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, visit or explore and compare products in the EMC Store.

Copyright 2015 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. Part Number H

TABLE OF CONTENTS

EXECUTIVE SUMMARY
AUDIENCE
VMAX 3 PRODUCT OVERVIEW
  VMAX 3 Overview
  VMAX 3 and Service Level Objective (SLO) based provisioning
STORAGE DESIGN PRINCIPLES FOR MICROSOFT SQL SERVER ON VMAX 3
  Storage connectivity considerations
  Physical host connectivity considerations
  Number and size of host devices considerations
  Virtual Provisioning and thin devices considerations
  Microsoft SQL Server and data skew
  Microsoft SQL Server data types and the choice of SLO
  Host I/O Limits and multi-tenancy
  Using cascaded storage groups
SQL SERVER DATABASE PROVISIONING
  Storage provisioning tasks with VMAX 3
  Provisioning SQL Server database storage with Unisphere
  Provisioning SQL Server database storage with Solutions Enabler CLI
MICROSOFT SQL SERVER SLO MANAGEMENT TEST USE CASES
  Test Configuration
  Test Overview
  Use case 1 - Single database run with change in SLO
  Use case 2 - Competing database runs with change in SLO
  Use case 3 - All-flash configuration with data files and transaction log
CONCLUSION
APPENDIX
  Solutions Enabler CLI commands for SLO management and monitoring

EXECUTIVE SUMMARY

The VMAX 3 family of storage arrays is the next major step in evolving VMAX hardware and software to meet new industry challenges of scale, performance, and availability. At the same time, VMAX 3 has taken a leap in making complex operations such as storage management, provisioning, and setting performance goals simple to execute and manage. VMAX 3 arrays come pre-configured from the factory to simplify deployment at customer sites and minimize time to first I/O. Each array uses Virtual Provisioning to allow easy and quick storage provisioning. While VMAX 3 can ship as an all-flash array, where the combination of EFD 1 (Enterprise Flash Drives) and a large cache accelerates both writes and reads even further, it also excels in providing FAST 2 (Fully Automated Storage Tiering) enabled performance management based on service level goals across multiple tiers of storage.

The new VMAX 3 hardware architecture comes with more CPU power, larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric. Many enhancements were introduced to VMAX 3 replication software to support new capabilities, such as TimeFinder SnapVX local replication, which allows hundreds of snapshots that can be incrementally refreshed or restored and can cascade any number of times. SRDF remote replication software also adds new features and capabilities that provide more robust remote replication support. VMAX 3 adds the ability to connect directly to a Data Domain system so that database backups can be sent from the primary storage to the Data Domain system without having to go through the host first. VMAX 3 also offers embedded file support (eNAS) via a new hypervisor layer, in addition to traditional block storage.

This white paper explains the basic VMAX 3 design changes with regard to storage provisioning and performance management, how they simplify the management of storage, and how they affect Microsoft SQL Server database layout decisions. It explains the new FAST architecture for managing SQL Server database performance using Service Level Objectives (SLOs) and provides guidelines and best practices for its use.

AUDIENCE

This white paper is intended for database and system administrators, storage administrators, and system architects who are responsible for implementing, managing, and maintaining SQL Server databases and VMAX 3 storage systems. It is assumed that readers have some familiarity with Microsoft SQL Server and the EMC VMAX 3 family of storage arrays, and are interested in achieving higher database availability, performance, and ease of storage management.

1 Enterprise Flash Drives (EFD) are SSD flash drives designed for the high performance and resiliency required by enterprise applications.
2 Fully Automated Storage Tiering (FAST) allows VMAX 3 storage to automatically and dynamically manage performance service level goals across the available storage resources to meet the application I/O demand, even as new data is added and access patterns continue to change over time.

VMAX 3 PRODUCT OVERVIEW

VMAX 3 OVERVIEW

The EMC VMAX 3 family of storage arrays is built on the strategy of simple, intelligent, modular storage. It incorporates a Dynamic Virtual Matrix interface that connects and shares resources across all VMAX 3 engines, allowing the storage array to seamlessly grow from an entry-level configuration into the world's largest storage array. It provides the highest levels of performance and availability, featuring new hardware and software capabilities. The newest additions to the EMC VMAX 3 family, VMAX 100K, 200K, and 400K, deliver the latest in Tier-1 scale-out multi-controller architecture with consolidation and efficiency for the enterprise. With enhanced hardware and software, the new VMAX 3 arrays provide unprecedented performance and scale. They offer dramatic increases in floor tile density (GB per square foot) with engines and high-capacity disk enclosures for both 2.5" and 3.5" drives consolidated in the same system bay.

Figure 1 shows possible VMAX 3 components. Refer to EMC documentation and release notes to find the most up-to-date supported components. In addition, VMAX 3 arrays can be configured as either hybrid or all-flash arrays. All VMAX 3 models come pre-configured from the factory to significantly shorten the time from installation to first I/O. A sample configuration:

1 to 8 redundant VMAX 3 engines
Up to 4 PB usable capacity
Up to 256 FC host ports
Up to 16 TB global memory (mirrored)
Up to 384 cores, 2.7 GHz Intel Xeon E v2
Up to 5,760 drives
SSD flash drives: 200/400/800 GB, 2.5"
10K RPM SAS drives: up to 1.2 TB, 2.5"
15K RPM SAS drives: 2.5"/3.5"
7.2K RPM SAS drives: 2 TB/4 TB, 3.5"

Figure 1 VMAX 3 storage array 3

VMAX 3 engines provide the foundation of the storage array. Each fully redundant engine contains two VMAX 3 directors and redundant interfaces to the new Dynamic Virtual Matrix dual InfiniBand fabric interconnect. Each director consolidates front-end, global memory, and back-end functions, enabling direct memory access to data for optimized I/O operations. Depending on the array chosen, up to eight VMAX 3 engines can be interconnected via a set of active fabrics that provide scalable performance and high availability. New to the VMAX 3 design, host ports are no longer mapped directly to CPU resources. CPU resources are allocated as needed from pools (front-end, back-end, or data services pools) of CPU cores that can service all activity in the VMAX 3 array. This shared multi-core architecture reduces I/O path latencies by facilitating system scaling of processing power without requiring additional drives or front-end connectivity.

VMAX 3 arrays introduce the industry's first open storage and hypervisor converged operating system, HYPERMAX OS. It combines industry-leading high availability, I/O management, data integrity validation, quality of service, storage tiering, and data security with an open application platform. HYPERMAX OS features a real-time, non-disruptive storage hypervisor that manages and protects embedded data services (running in virtual machines) by extending VMAX high availability to data services that traditionally have run external to the array (such as Unisphere). HYPERMAX OS runs on top of the Dynamic Virtual Matrix, leveraging its scale-out flexibility of cores, cache, and host interfaces. The embedded storage hypervisor reduces external hardware and networking requirements, delivers the highest levels of availability, and dramatically lowers latency.
3 Additional drive types and capacities may be available. Contact an EMC representative for more details.

All storage in the VMAX 3 array is virtually provisioned. VMAX Virtual Provisioning enables users to simplify storage management and increase capacity utilization by sharing storage among multiple applications and only allocating storage as needed from a shared pool of physical disks known as a Storage Resource Pool (SRP). The array uses the dynamic and intelligent capabilities of Fully Automated Storage Tiering (FAST) to meet specified Service Level Objectives (SLOs) throughout the lifecycle of each application. VMAX 3 SLOs and SLO provisioning are new to the VMAX 3 family and are tightly integrated with EMC FAST software to optimize agility and array performance across all drive types in the system. While VMAX 3 can ship in an all-flash configuration, when purchased with hybrid drive types as a combination of flash and hard drives, EMC FAST technology can improve application performance and at the same time reduce cost by intelligently using a combination of high-performance flash drives with cost-effective high-capacity hard disk drives.

For local replication, VMAX 3 adds a new feature to TimeFinder software called SnapVX, which provides support for a greater number of snapshots. Unlike previous VMAX snapshots, SnapVX snapshots do not require the use of dedicated target devices. SnapVX allows for up to 256 snapshots per individual source. These snapshots can copy (referred to as link-copy) their data to new target devices and re-link to update just the incremental data changes of previously linked devices. For remote replication, SRDF adds new capabilities and features to provide protection for SQL Server databases and applications. All user data entering VMAX 3 is T10 DIF protected, including replicated data and data on disks.

VMAX 3 AND SERVICE LEVEL OBJECTIVE (SLO) BASED PROVISIONING

Introduction to FAST in VMAX 3

With VMAX 3, FAST (Fully Automated Storage Tiering) is enhanced to include both intelligent storage provisioning and performance management, using Service Level Objectives (SLOs). SLOs automate the allocation and distribution of application data to the correct data pool (and therefore storage tier) without manual intervention. Simply choose the SLO (for example, Platinum, Gold, or Silver) that best suits the application requirements. SLOs are tied to expected average I/O latency for both reads and writes; therefore, both the initial provisioning and the application's ongoing performance are automatically measured and managed based on compliance to storage tiers and performance goals. FAST samples the storage activity every 10 minutes and, when necessary, moves data at FAST's sub-LUN granularity, which is 5.25MB (42 extents of 128KB). SLOs can be dynamically changed at any time (promoted or demoted), and FAST continuously monitors and adjusts data location at the sub-LUN granularity across the available storage tiers to match the performance goals provided. All this is done automatically, within the VMAX 3 storage array, without having to deploy a complex application ILM 4 strategy or use host resources for migrating data due to performance needs.

VMAX 3 FAST Components

Figure 2 depicts the elements of FAST that form the basis for SLO based management, as described below.

Physical disk group provides a grouping of physical storage (flash or hard disk drives) based on drive type. All drives in a disk group have the same technology, capacity, form factor, and speed.
The disk groups are pre-configured at the factory, based on the specified configuration requirements at the time of purchase. Data Pool is a collection of RAID protected internal devices (also known as TDATs, or thin data devices) that are carved out of a single physical disk group. Each data pool can belong to a single SRP (see definition below), and provides a tier of storage based on its drive technology and RAID protection. Data pools can allocate capacity for host devices or replications. Data pools are also preconfigured at the factory to provide optimal RAID protection and performance. Storage Resource Pool (SRP) is a collection of data pools that provides FAST a domain for capacity and performance management. By default, a single default SRP comes factory pre-configured. Additional SRPs can be created with an EMC service engagement. The data movements performed by FAST are done within the boundaries of the SRP and are covered in detail later in this paper. Storage Group (SG) is a collection of host devices (LUNs) that consume storage capacity from the underlying SRP. Because both FAST and storage provisioning operations are managed at a storage group level, storage groups can be cascaded (hierarchical) to allow different levels of granularity required for each operation (see cascaded storage group section later). 4 Information Lifecycle Management (ILM) refers to a strategy of managing application data based on policies. It usually involves complex data analysis, mapping, and tracking practices. 6

Host devices (LUNs) are the components of a storage group. In VMAX 3 all host devices are virtual, and at the time of creation they can be fully allocated or thin. Virtual means that they are a set of pointers to data in the data pools, allowing FAST to manage the data location across data pools seamlessly. Fully allocated means that the device's full capacity is reserved in the data pools even before the host has access to the device. Thin means that although the host sees the LUN with its full reported capacity, no capacity is actually allocated from the data pools until it is explicitly written to by the host. All host devices are natively striped across the data pools where they are allocated, with a granularity of a single VMAX 3 track, which is 128KB.

Service Level Objectives (SLOs) provide a pre-defined set of service levels (such as Platinum, Gold, or Silver) that can be supported by the underlying SRP. Each SLO has a specific performance goal, and some also have tier 5 compliance goals from the underlying SRP that FAST will work to satisfy. For example, a Bronze SLO will attempt to have no data on EFDs, and a Platinum SLO will attempt to have no data on 7.2K RPM drives. An SLO defines an expected average response time target for a storage group. By default, all host devices and all storage groups are attached to the Optimized SLO (which assures I/Os are serviced from the most appropriate data pool for their workload), but in cases where more deterministic performance goals are needed, specific SLOs can be specified.

Figure 2 VMAX 3 architecture and service level provisioning

5 Storage tiers refer to combinations of disk drive and RAID protection that create unique storage service levels. For example, Flash (SSD) drives with RAID5 protection, 10K or 15K RPM drives with RAID1 protection, etc.
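
For orientation, the factory pre-configured SRP and its data pools can be inspected, and thin or fully allocated host devices created, from the Solutions Enabler CLI. The following is a minimal sketch only, assuming Solutions Enabler 8.x syntax; the array ID (123), device counts, and sizes are illustrative, and the exact options should be verified against the Solutions Enabler documentation for your release:

    # List the Storage Resource Pools configured on the array
    symcfg -sid 123 list -srp

    # Show the data pools and capacities behind the SRP
    symcfg -sid 123 list -srp -detail

    # Create eight 100 GB thin host devices (capacity allocated on demand)
    symconfigure -sid 123 -cmd "create dev count=8, size=100 GB, emulation=FBA, config=TDEV;" commit

    # Create two 100 GB devices that are fully allocated at creation time
    symconfigure -sid 123 -cmd "create dev count=2, size=100 GB, emulation=FBA, config=TDEV, preallocate size=ALL;" commit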

Service Level Objectives (SLO) and Workload Types Overview

Each storage resource pool (SRP) contains a set of known storage resources, as seen in Figure 2. Based on the available resources in the SRP, HYPERMAX OS offers a list of the Service Level Objectives (SLOs) that can be met using that particular SRP, as shown in Table 1. This assures that SLOs can be met, and that SRPs aren't provisioned beyond their ability to meet application requirements.

Note: When certain drive types are not present in the storage, the associated SLOs will not be shown. For example, if no 7.2K RPM drives are available in the system, the Bronze SLO will not be offered. Likewise, if EFDs are not present in the SRP, the Diamond SLO will not be offered.

Note: Since SLOs are tied to the available drive types, it is important to plan the requirements for a new VMAX 3 system carefully. EMC works with customers using a new and easy-to-use Sizer tool to assist with this task.

Table 1 Service Level Objectives

SLO         Minimum required drive combinations to list SLO    Performance expectation
Diamond     EFD                                                Emulating EFD performance
Platinum    EFD and (15K or 10K)                               Emulating performance between 15K drive and EFD
Gold        EFD and (15K or 10K or 7.2K)                       Emulating 15K drive performance
Silver      EFD and (15K or 10K or 7.2K)                       Emulating 10K drive performance
Bronze      7.2K and (15K or 10K)                              Emulating 7.2K drive performance
Optimized   Any                                                System optimized performance

By default, no specific SLO needs to be selected, as all data in the VMAX 3 storage array receives the Optimized SLO. The system Optimized SLO meets performance and compliance requirements by dynamically placing the most active data in the highest-performing tier and less active data in low-performance, high-capacity tiers.

Note: The Optimized SLO offers an optimal balance of resources and performance across the whole SRP, based on I/O load, type of I/Os, data pool utilization, and available capacities in the pools. It places the most active data on higher-performing storage and the least active data on the most cost-effective storage. If data pool capacity or utilization is stressed, it attempts to alleviate the condition by using other pools.

However, when specific storage groups (database LUNs) require a more deterministic SLO, one of the other available SLOs can be selected. For example, a storage group holding critical SQL Server data files can receive a Diamond SLO while the logs can be put on Platinum. A less critical application can be fully contained in a Gold or Silver SLO. Refer also to the MICROSOFT SQL SERVER DATA TYPES AND THE CHOICE OF SLO section later in this paper.

Once an SLO is selected (other than Optimized), it can be further qualified by a workload type: OLTP or DSS, where an OLTP workload is focused on optimizing performance for small-block I/O and a DSS workload is focused on optimizing performance for large-block I/O. The workload type can also specify whether to account for any overhead associated with replication (local or remote). The workload type qualifiers for replication overhead are OLTP_Rep and DSS_Rep, where Rep denotes replicated.

Understanding SLO Definitions and Workload Types

Each SLO is effectively a reference to an expected response-time range (minimum and maximum allowed latencies) for host I/Os, where a particular Expected Average Response Time is attached to each SLO and workload combination. The Solutions Enabler CLI or Unisphere for VMAX can list the available service level and workload combinations, as seen in Figure 3 (see the command line syntax example for listing available SLOs in the Appendix). They list only the expected average latency, not the range of values. Without a workload type, the latency range is the widest for its SLO type. When a workload type is added, the range is reduced, due to the added information. When Optimized is selected (which is also the default SLO for all storage groups, unless the user assigns another), the latency range is in fact the full latency spread that the SRP can satisfy, based on its known and available components.

Figure 3 Unisphere shows available SLOs

Important SLO considerations:

Because an SLO references a range of target host I/O latencies, the smaller the spread, the more predictable the result. It is therefore recommended to select both an SLO and a workload type. For example: Platinum SLO with an OLTP workload and no replication.

Because an SLO references an Expected Average Response Time, it is possible for two applications executing a similar workload and set with the same SLO to perform slightly differently. This can happen if the host I/O latency still falls within the allowed range. For that reason it is recommended to use a workload type together with an SLO when a smaller range of latencies is desirable.

Note: SLOs can be easily changed online using Solutions Enabler or Unisphere for VMAX. Also, when it is necessary to add additional layers of SLOs, the Storage Group (SG) can easily be changed into a cascaded SG so that each child, or the parent, can receive its appropriate SLO.
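
For reference, the available service level and workload combinations can be listed directly from the Solutions Enabler CLI as well as from Unisphere. The following is a sketch assuming Solutions Enabler 8.x syntax; verify the exact options against the Appendix and the CLI documentation for your version:

    # List the Service Level Objectives the array can offer
    symcfg -sid 123 list -slo

    # Show additional detail for each SLO, including expected average response times
    symcfg -sid 123 list -slo -detail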

STORAGE DESIGN PRINCIPLES FOR MICROSOFT SQL SERVER ON VMAX 3

VMAX 3 storage provisioning has become much simpler than in previous releases. Since the VMAX 3 physical disk groups, data pools, and even the default SRP come pre-configured from the factory, based on inputs to the Sizer tool that helps size them correctly, the only things required are to configure connectivity between your hosts and the VMAX 3 and then start provisioning host devices. The following sections discuss the principles and considerations for storage connectivity and provisioning for Microsoft SQL Server.

STORAGE CONNECTIVITY CONSIDERATIONS

When planning storage connectivity for performance and availability, it is recommended to go wide before going deep, which means it is better to connect storage ports across different engines and directors 6 than to use all the ports on a single director. In this way, even in the case of a component failure, the storage can continue to service host I/Os.

New to VMAX 3 is dynamic core allocation. Each VMAX 3 director provides services such as front-end connectivity, back-end connectivity, or data management. Each such service has its own set of cores on each director that are pooled together to provide CPU resources, which can be allocated as necessary. For example, even if host I/Os arrive via a single front-end port on the director, the front-end pool with all its CPU cores will be available to service that port. Because I/Os arriving at other directors have their own core pools, for best performance and availability it is again recommended to connect each host to ports on different directors before using additional ports on the same director.

PHYSICAL HOST CONNECTIVITY CONSIDERATIONS

Host connectivity considerations include two aspects: the number and speed of the HBA ports (initiators), and the number and size of host devices.

HBA port considerations: Each HBA port (initiator) creates a path for I/Os between the host and the SAN switch, which then continues to the VMAX 3 storage. If a host were to use only a single HBA port, it would have a single I/O path that has to serve all I/Os. Such a design is not advisable, as a single path doesn't provide high availability and risks becoming a bottleneck during high I/O activity, because there are no additional ports for load balancing. A better design provides each database server with at least two HBA ports, preferably on two separate HBAs. The additional ports provide more connectivity and also allow multipathing software, such as EMC PowerPath or Microsoft Multipath I/O (MPIO), to load-balance and fail over across HBA paths.

Each path between a host and a storage device creates a SCSI device representation on the host. For example, two HBA ports going to two VMAX front-end adapter ports with a 1:1 relationship create three presentations for each host device: one for each path, and another that the multipathing software creates as an EMC SYMMETRIX Multi-Path Disk Device (a PowerPath system device). If each HBA port were zoned and masked to both FA ports (a 1:many relationship), there would be five SCSI device representations for each host device (one for each path combination plus the pseudo device). While modern operating systems can manage hundreds of devices, it is not advisable or necessary, and it burdens the user with complex tracking and storage provisioning management overhead.
It is therefore recommended to have enough HBA ports to support workload concurrency, availability, and throughput, but to use 1:1 relationships to storage front-end ports, and not have each HBA port zoned and masked to all VMAX front-end ports. Such an approach provides enough connectivity, availability, and concurrency, yet reduces the complexity of the host registering many SCSI devices unnecessarily. The sketch after this paragraph shows how to verify the resulting path count per device.

6 Each VMAX 3 engine has two redundant directors.
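
After zoning and masking are in place, it is worth confirming that each host device actually presents the expected number of paths. As a hedged illustration, with EMC PowerPath the path state per device can be checked with powermt, and with native Microsoft MPIO the mpclaim utility gives a similar summary (device names and output formats vary by environment):

With EMC PowerPath, display every pseudo device and the state of each of its paths:

    powermt display dev=all

With native Microsoft MPIO, summarize the MPIO-claimed disks and their paths:

    mpclaim -s -d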

NUMBER AND SIZE OF HOST DEVICES CONSIDERATIONS

VMAX 3 introduces the ability to create host devices with a capacity from a few megabytes to multiple terabytes. With the native striping across the data pools that VMAX 3 provides, the user may be tempted to create only a few very large host devices. Consider the following example: a 1TB Microsoft SQL Server database can reside on 1 x 1TB host device, or perhaps on 10 x 100GB host devices. While either option satisfies the capacity requirement, it is recommended to use a reasonable number and size of host devices. In the example above, if the database capacity were to rise above 1TB, it is likely that the DBA would want to add another device of the same capacity, even if 2TB in total were not needed. Large host devices therefore create very large building blocks when additional storage is needed. Secondly, each host device creates its own host I/O queue at the operating system. Each such queue can service a tunable, but limited, number of I/Os that can be transmitted simultaneously. If, for example, the host had 4 HBA ports and a single 1TB LUN (using the previous example again), with multipathing software it would have only 4 paths available to queue I/Os. A high level of database activity will generate more I/Os than the queues can service, resulting in artificially elongated latencies. In this example, two or more host devices are advisable to alleviate such an artificial bottleneck. Host software such as EMC PowerPath or Windows perfmon can help in monitoring host I/O queues to make sure the number of devices and paths is adequate for the workload. Another benefit of using multiple host devices is that, internally, the storage array can use more parallelism when operations such as FAST data movement or local and remote replications take place. By performing more copy operations simultaneously, the overall operation takes less time. While there is no one magic number for the size 7 and number of host devices, we recommend finding a reasonably low number that offers enough concurrency, provides an adequate building block for capacity when additional storage is needed, and doesn't become too large to manage.

VIRTUAL PROVISIONING AND THIN DEVICES CONSIDERATIONS

All VMAX 3 host devices are virtually provisioned (also known as thin provisioning), meaning they are merely a set of pointers to capacity allocated at 128KB extent granularity in the storage data pools. However, to the host they look and respond just like regular LUNs. Using pointers enables FAST to move the application data between the VMAX 3 data pools without affecting the host. It also allows better capacity efficiency for TimeFinder snapshots by sharing extents when data doesn't change between snapshots. Virtual Provisioning offers a choice of whether to fully allocate the host device capacity or to allow allocation on-demand. A fully allocated device consumes all its capacity in the data pool on creation, and therefore there is no risk that future writes may fail if the SRP has no capacity left 8. On the other hand, allocation on-demand allows over-provisioning, meaning that although the storage devices are created and look to the host as available with their full capacity, actual capacity is only allocated in the data pools when host writes occur. This is a common cost-saving practice.
Allocation on-demand is suitable in situations when:
The application's capacity growth rate is unknown, and
The user prefers not to commit large amounts of storage ahead of time, as it may never get used, and
The user prefers not to disrupt host operations at a later time by adding more devices.

Therefore, if allocation on-demand is leveraged, capacity will only be physically assigned as it is needed to meet application requirements.

7 Follow the best practice of quick-formatting NTFS volumes using the Microsoft-recommended 64KB allocation unit size for data, logs, and TEMPDB (see the example that follows).
8 FAST allocates capacity in the appropriate data pools based on the workload and SLO. However, when a data pool is full, FAST may use other pools in the SRP to prevent host I/O failure.
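
As a companion to footnote 7, the example below shows one way to quick-format an NTFS volume for SQL Server with the 64KB allocation unit size from a Windows command prompt. The drive letter and volume label are illustrative; adjust them for your environment:

    format F: /FS:NTFS /A:64K /V:SQLDATA /Q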

In general, when data files are created, Microsoft SQL Server pre-allocates their capacity by writing contiguous zeros to every page. When allocation on-demand is used, it is best to deploy a strategy where database capacity is grown over time based on actual need. For example, if SQL Server was provisioned with a thin device of 2TB, rather than immediately creating data files of 2TB and consuming all of its space, the DBA should use the auto-growth feature, which consumes capacity only as needed.

Note: In cases where Windows Instant File Initialization (IFI) is involved, allocation of data files happens in a thin-pool-friendly way. Areas of a disk under which a sparse file is defined, as created by Instant File Initialization, are not zeroed. As table and index information is written to a fully initialized data file, areas of the database become allocated and used by non-zero user data. SQL Server automatically uses Windows Instant File Initialization, provided the service account under which the SQL Server service is running has the Perform volume maintenance tasks permission under the local security policy. By default, only administrators have this permission. Information regarding Instant File Initialization is provided in the Microsoft SQL Server Books Online product documentation.

MICROSOFT SQL SERVER AND DATA SKEW

SQL Server databases perform both random and sequential I/O operations. Skewed data access is a phenomenon that is present in most databases. Skewing occurs when a small number of SQL Server LUNs, or the data files resident on those LUNs, receive a large percentage of the I/O from the applications. Data access skewing also commonly occurs in database environments because the most recently created data is typically accessed most frequently. As data ages, its relative importance becomes lower and its activity level drops. This skewed type of activity can be transient, lasting only a short period of time, or persistent, lasting much longer. Skewed, persistent access to storage allocations makes those allocations good candidates for promotion to a higher-performing storage tier. Skewing by its nature also means that there are SQL Server data structures receiving lower activity than others. If the storage allocations servicing those structures persistently receive fewer requests than others, they are good candidates for FAST to down-tier. By using sub-LUN level FAST, bottlenecks caused by skewed workloads landing on a limited set of spindles are avoided completely.

MICROSOFT SQL SERVER DATA TYPES AND THE CHOICE OF SLO

The following sections describe considerations for various SQL Server data types and the selection of SLOs to achieve the desired performance.

Planning SLO for SQL Server databases

VMAX 3 storage arrays can support many enterprise applications, together with all their replication needs and auxiliary systems (such as test, development, reporting, patch-testing, and others). With FAST and Service Level Objective (SLO) management, it is easy to provide the right amount of resources to each such environment, and to modify it as business priorities or performance needs change over time. This section discusses some of the considerations regarding different SQL Server data types and SLO assignment for them.
When choosing an SLO for SQL Server databases, consider the following:

While FAST operates at a sub-LUN granularity to satisfy SLO and workload demands, the SLO is set at a storage group granularity (a group of devices). It is therefore important to match the storage group to sets of devices of equal application and business priority.

Consider that with VMAX 3 all writes go to cache, which is persistent, and are destaged to the back-end storage by a lazy writer. Therefore, unless other factors are in play (such as synchronous remote replication, long I/O queues, or a system that is over-utilized), write latency should always be very low (cache hit), regardless of the SLO or the disk technology storing the data. On a well-balanced system, the SLO's primary effect is on read latency.

In general, for mission-critical databases EMC recommends separating the data files, the log files, and TEMPDB onto distinct sets of devices or child storage groups. This allows setting a different SLO for each, providing control over I/O latency and improving manageability:

SQL Server data files: The primary data file (.mdf), secondary data files (.ndf), and a transaction log file (.ldf) are associated with every SQL Server database. Microsoft SQL Server database environments are created by defining one or more filegroups (the PRIMARY filegroup always exists). Each filegroup is subsequently defined to exist on one or more data files located on NTFS volumes. Due to the proportional fill mechanism used by SQL Server, data files within a given filegroup generate almost identical I/O patterns. This highly correlated workload is the primary motivation for the best practice recommendation to use multiple data files within a filegroup to distribute I/O load.

SQL Server transaction logs: Active transaction log files are formatted when a Microsoft SQL Server database is created. At the time of creation, every single page of the log files is written, so the log files become fully provisioned when they are initialized. The log file contains mostly contiguous zeroes when first written, but over time transactions are written to the log files and later truncated based on log backup or checkpoint operations. Transaction logs are written to by all activities, such as data modifications, data loads, and index rebuilds. Storage allocations used for SQL Server transaction log files see a different style of I/O activity. Pre-written space on the devices provides optimum write performance, as the allocations are dedicated and available for the SQL Server transaction log file. Overall performance of a highly active SQL Server database servicing an Online Transaction Processing (OLTP) workload can be adversely affected by long latencies to the transaction log. Usually, log write latencies of less than 10ms are good, while anything over 50ms indicates an I/O bottleneck. 9 These latency requirements result from the Write Ahead Logging (WAL) feature of SQL Server. WAL ensures that SQL Server returns to a transactionally consistent state even after a server or system outage. The WAL feature persists all updates and inserts to the transaction log before committing them to the data files. The combination of the current state of the data files and the transaction log allows SQL Server to recover in the event of a failure. This functionality requires that a commit for a transaction be persisted to the log file before the transaction returns a status to the user process. Longer latencies for log writes therefore reduce user process performance. Transaction log writes are small I/Os that are cached and coalesced by VMAX 3 for optimized disk writes. Backup and truncation of the transaction logs result in large sequential reads, so the latency of that I/O also influences overall database performance.

SQL Server TEMPDB: TEMPDB is a system database that is used to maintain temporary sort areas, stored procedures, and so on. Of all the system databases, TEMPDB can be the destination of significant I/O load. The usage cycle of Microsoft SQL Server TEMPDB storage deployed on virtually provisioned storage is interesting to follow and understand. Initially, since TEMPDB is simply another SQL Server database, the rules for data files and log files apply.
The transaction log files will be fully allocated at creation, and data files will be allocated on an as-needed basis. The TEMPDB system database is utilized by SQL Server to provide semi-persistent storage for certain operations. The amount of activity to the TEMPDB data and log files may be significant in certain environments, depending on the style of Transact-SQL statements being executed and certain SQL Server functionality. SQL Server recreates the TEMPDB storage when a new instance of the SQL Server environment is initiated. Even if there is no viable persistent data stored within TEMPDB, the thin extents remain allocated when the SQL Server instance is restarted and TEMPDB is re-initialized. The total thin extent allocation will be based on the maximum TEMPDB utilization, but will never exceed the maximum size of TEMPDB 10 itself. Depending on the usage of TEMPDB for large searches, batch jobs, small table transactions, or online indexing operations, TEMPDB may experience a large sequential I/O or small random I/O workload. Most TEMPDB usage patterns exhibit very little locality of reference, so this I/O may experience low cache hit ratios and therefore more disk I/Os, making disk latency very important for TEMPDB operations. Keeping TEMPDB in a separate storage group allows an appropriate SLO selection for TEMPDB.

9 Refer to the MSDN Blogs post "How To: Troubleshooting SQL Server I/O bottlenecks".
10 While it would be typical for a production database system to require additional storage allocations for system databases such as TEMPDB, the OLTP workload used in this white paper was not used to generate workload against TEMPDB. As a result, these additional storage allocations were not considered.
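
To relate these latency guidelines to an actual deployment, the read and write latencies of the Windows volumes hosting data, log, and TEMPDB files can be sampled with the built-in typeperf utility (or viewed interactively in Performance Monitor). The counter paths below are standard LogicalDisk counters; the sample interval and count are arbitrary examples:

    typeperf "\LogicalDisk(*)\Avg. Disk sec/Read" "\LogicalDisk(*)\Avg. Disk sec/Write" -si 15 -sc 20

Values are reported in seconds, so 0.010 corresponds to the 10ms guideline mentioned above.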

The following sections address SLO considerations for these data types.

SLO considerations for SQL Server data files

A key part of performance planning for a SQL Server database is understanding the business priority of the application it serves, and with large databases it can also be important to understand the structure of the associated data files. A default SLO can be used for the whole database for simplicity, but when more control over database performance is necessary, a distinct SLO should be used, together with a workload type. The choice of workload type is simple: for databases focused on sequential reads and writes, a DSS type should be used. For databases that either focus on transactional applications (OLTP) or serve mixed workloads, such as transactional and reporting combined, an OLTP type should be used. If storage remote replication (SRDF) is used, add the "with Replication" qualifier to the workload type (OLTP_Rep or DSS_Rep).

When to use Diamond SLO: Diamond SLO is only available when EFDs are present in the SRP. It tells FAST to move all the allocated storage extents in that storage group to EFDs, regardless of the I/O activity to them. Diamond provides the best read I/O latency, as flash technology is best for random reads. Diamond is also popular for mission-critical databases servicing many users, where the system is always busy, or where groups of users start their workloads intermittently and expect high performance with low latency. Because the whole storage group resides on EFDs, users get the best performance no matter when they become active.

When to use Bronze SLO: Bronze SLO doesn't allow the storage group to leverage EFDs, regardless of the I/O activity. It is a good choice for databases that don't require stringent performance and should let more critical applications utilize capacity on EFDs. For example, databases can use a Bronze SLO when their focus is development, test, or reporting. Another use for Bronze SLO is for gold copies of the database.

When to use Optimized SLO: Optimized SLO is a good default when FAST should make the best decisions based on actual workload and for the storage array as a whole. Because Optimized SLO uses the widest range of allowed I/O latencies, FAST will attempt to give the active extents in the storage group the best performance (including EFDs if possible). However, if there are competing workloads with explicit SLOs, they may get priority for the faster storage tiers, based on the lower latency requirements of those SLOs.

When to use Silver, Gold, or Platinum SLO: As explained earlier, each SLO provides a range of allowed I/O latency that FAST will work to maintain. Provide the SLO that best fits the application based on business and performance needs. Refer to Table 1 and Figure 3 to determine the desirable SLO.

SLO considerations for SQL Server log files

Typically, the I/O generated to a transaction log is a sequential write stream, with occasional reads when user transactions abort. Sizing I/O capacity for a LUN that serves as storage for a transaction log is therefore different from that required for a data file. SQL Server uses a write-ahead logging (WAL) mechanism, where change descriptions (log records) related to the current transaction are committed to the transaction log file before they are written to the data file. Once there is a sufficient number of dirty pages, the lazy writer or the checkpoint process writes them to disk.
The checkpoint process runs periodically to keep the number of dirty pages in cache small, so that database recovery time stays short. VMAX 3 services transaction log write operations from global memory: all write operations are acknowledged as complete once they are saved to cache. All cache is mirrored and is protected by the vaulting process should the need arise. Write operations received by the VMAX 3 environment are then persisted to disk, providing full support for SQL Server WAL requirements. Because cache acknowledges all write operations, VMAX 3 depends less on the physical storage required to service the write stream. VMAX 3 may also coalesce multiple discrete write operations into a smaller number of larger write operations, which improves the efficiency of transaction log writes while reducing the I/O demand.

Since all writes in VMAX 3 go to cache, the SLO has limited effect on log performance. Log files are write-latency critical, and their writes are almost instantly acknowledged by VMAX 3 cache rather than by the back-end storage technology, so sizing and performance for the transaction log volumes is generally easily met with any SLO. Therefore, log files can normally use the Optimized (default) SLO or the same SLO as is used for the data files. In special cases, where the DBA wants the logs on the best storage tiers, Platinum or Diamond can be used instead. The SLO of a log (or data) storage group can be changed online at any time, as shown in the example that follows.
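
Because SLOs are assigned at the storage group level and can be changed online, the data file and log file storage groups discussed above can be re-prioritized without any host-side change. The sketch below assumes Solutions Enabler 8.x syntax; the array ID, storage group names, and the exact -slo and -wl option spellings are illustrative and should be verified against the symsg documentation for your release:

    # Promote the data file storage group to Diamond with an OLTP workload type
    symsg -sid 123 -sg SQL_Data_SG set -slo Diamond -wl OLTP

    # Leave the log storage group on the default Optimized SLO, or set it explicitly
    symsg -sid 123 -sg SQL_Log_SG set -slo Optimized

FAST then adjusts the placement of the affected extents over its subsequent analysis cycles.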

SLO considerations for SQL Server TEMPDB

TEMPDB usage is an important aspect of many customer configurations, and the considerations described for the user databases apply to TEMPDB as well.

SLO considerations for All-Flash workloads

When a workload requires predictable low-latency, high-IOPS performance, or when many users with intermittent workload peaks share a consolidated environment and each requires high performance during their respective activity window, an all-flash deployment is suitable. All-flash deployment is also suitable when data center power and floor space are limited and a high-performance consolidated environment is desirable.

Note: Unlike all-flash appliances, VMAX 3 offers a choice of a single EFD tier or multiple tiers. Since most databases require additional capacity for replicas, test/dev environments, and other copies of the production data, consider a hybrid array for these replicas and simply assign the production data to the Diamond SLO.

SLO considerations for noisy neighbor and competing workloads

In highly consolidated environments, many databases and applications compete for storage resources. FAST can provide each with the appropriate performance when specific SLOs and workload types are specified. By using different SLOs for each application (or group of applications), it is easy to manage such a consolidated environment and to modify the SLOs when business requirements change. Refer to the next section for additional ways of controlling performance in a consolidated environment.

HOST I/O LIMITS AND MULTI-TENANCY

The Host I/O Limits quality of service (QoS) feature was introduced in the previous generation of VMAX arrays, and it continues to offer VMAX 3 customers the option to place specific IOPS or bandwidth limits on any storage group, regardless of the SLO assigned to that group. Assigning a specific Host I/O Limit for IOPS, for example, to a storage group with low performance requirements can ensure that a spike in I/O demand will not saturate its storage, cause FAST to inadvertently migrate extents to higher tiers, or overload the storage and affect the performance of more critical applications. Placing a specific IOPS limit on a storage group limits the total IOPS for the storage group, but it does not prevent FAST from moving data based on the SLO for that group. For example, a storage group with a Gold SLO may have data in both EFD and HDD tiers to satisfy its I/O latency goals, yet be limited to the IOPS allowed by its Host I/O Limit.

USING CASCADED STORAGE GROUPS

VMAX 3 offers cascaded Storage Groups (SGs), in which multiple child storage groups are associated with a single parent storage group for ease of manageability and storage provisioning. This provides flexibility: different SLOs can be assigned to individual child storage groups to manage service levels for various application objects, while the parent (cascaded) storage group is used for storage provisioning. Figure 4 shows a SQL Server database using a single storage group, whereas Figure 5 shows an example of a cascaded storage group.

Figure 4 Single storage group

Figure 5 Cascaded storage group

As shown in Figure 5, the SQL Server data files storage group is set to use the Gold SLO, whereas the SQL Server transaction log storage group is set to use the Bronze SLO. Both storage groups are children of the cascaded storage group SQL_DB_SG, which can be used to provision all the database devices to the host, or to multiple hosts in the case of a cluster. A similar layout can also be created from the command line, as sketched below.
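
The following is a minimal sketch of building the layout in Figure 5 with Solutions Enabler, assuming version 8.x syntax; the array ID, SRP name, and storage group names mirror the figure but are otherwise illustrative:

    # Child storage groups, each with its own SLO
    symsg -sid 123 create SQL_Data_SG -srp SRP_1 -slo Gold -wl OLTP
    symsg -sid 123 create SQL_Log_SG -srp SRP_1 -slo Bronze

    # Parent (cascaded) storage group used for provisioning
    symsg -sid 123 create SQL_DB_SG
    symsg -sid 123 -sg SQL_DB_SG add sg SQL_Data_SG
    symsg -sid 123 -sg SQL_DB_SG add sg SQL_Log_SG

Devices are then added to the child storage groups, and the parent SQL_DB_SG is the group referenced by the masking view described in the next section.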

SQL SERVER DATABASE PROVISIONING

STORAGE PROVISIONING TASKS WITH VMAX 3

Since VMAX 3 comes pre-configured with data pools and a Storage Resource Pool (SRP), what is left to do is to create the host devices and make them visible to the hosts through an operation called device masking.

Note: Zoning at the switch establishes the physical connectivity that device masking then narrows down. Zoning needs to be set ahead of time between the host initiators and the storage ports that will be used for device masking.

Device creation is an easy task and can be performed in a number of ways:
1) Using the Unisphere for VMAX 3 UI intuitive provisioning wizards
2) Using the Solutions Enabler CLI

Device masking is also an easy task and includes the following steps (a CLI sketch follows this list):
1) Creation of an Initiator Group (IG). An initiator group is the list of host HBA port WWNs to which the devices will be visible.
2) Creation of a Storage Group (SG). Since storage groups are used for both FAST SLO management and storage provisioning, review the discussion on cascaded storage groups earlier in this paper.
3) Creation of a Port Group (PG). A port group is the group of VMAX 3 front-end ports where the host devices will be mapped and visible.
4) Creation of a Masking View (MV). A masking view brings together the combination of SG, PG, and IG.

Device masking controls host access to storage, enabling a secure multi-tenant environment. For example, storage ports can be shared across many servers, but only the masking view determines which devices of each server will have access to which ports.
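
As a CLI counterpart to the steps above, the following sketch shows the masking workflow with the symaccess command. It assumes Solutions Enabler 8.x syntax; the array ID, group names, WWNs, and director:port identifiers are placeholders to adapt to your environment:

    # 1) Initiator group containing the host HBA port WWNs
    symaccess -sid 123 create -name SQL_IG -type initiator -wwn 10000000c9aaaaaa
    symaccess -sid 123 -name SQL_IG -type initiator add -wwn 10000000c9bbbbbb

    # 2) Storage group: reuse the cascaded SQL_DB_SG created earlier

    # 3) Port group with the VMAX 3 front-end ports to be used
    symaccess -sid 123 create -name SQL_PG -type port -dirport 1D:24
    symaccess -sid 123 -name SQL_PG -type port add -dirport 2D:24

    # 4) Masking view that ties the SG, PG, and IG together
    symaccess -sid 123 create view -name SQL_MV -sg SQL_DB_SG -pg SQL_PG -ig SQL_IG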

PROVISIONING SQL SERVER DATABASE STORAGE WITH UNISPHERE

This section covers storage provisioning for Microsoft SQL Server databases using Unisphere for VMAX.

Creation of a host Initiator Group (IG)
Provisioning storage requires creation of host initiator groups by specifying the host HBA WWN ports. To create a host IG, select the appropriate VMAX 3 storage array, then select the Hosts tab and choose from the list of initiator WWNs, as shown in Figure 6.

Figure 6 Create Initiator Group

Creation of a Storage Group (SG)
A storage group defines a group of one or more host devices. Using the SG creation screen, a storage group name is specified, and new storage devices can be created and placed into the storage group together with their initial SLO. If more than one group of devices is requested, each group creates a child SG and can take its own unique SLO. The storage group creation screen is shown in Figure 7.

Figure 7 Create Storage Group

Select host(s)
In this step, the hosts to which the new storage will be provisioned are selected. This is done by selecting an IG (host HBA ports), as shown in Figure 8.

Figure 8 Select Initiator Group

Creation of a Port Group (PG)
A port group defines which of the VMAX front-end ports will map and mask the new devices. A new port group can be created, or an existing one selected, as seen in Figure 9. Unisphere will automatically select the ports where your host or cluster initiators are logged in.

Figure 9 Create Port Group


Violin Memory Arrays With IBM System Storage SAN Volume Control Technical White Paper Report Best Practices Guide: Violin Memory Arrays With IBM System Storage SAN Volume Control Implementation Best Practices and Performance Considerations Version 1.0 Abstract This

More information

The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000)

The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) IntelliMagic, Inc. 558 Silicon Drive Ste 101 Southlake, Texas 76092 USA Tel: 214-432-7920

More information

TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS

TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS Leverage EMC and VMware To Improve The Return On Your Oracle Investment ESSENTIALS Better Performance At Lower Cost Run

More information

Microsoft Private Cloud Fast Track Reference Architecture

Microsoft Private Cloud Fast Track Reference Architecture Microsoft Private Cloud Fast Track Reference Architecture Microsoft Private Cloud Fast Track is a reference architecture designed to help build private clouds by combining Microsoft software with NEC s

More information

INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT

INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT UNPRECEDENTED OBSERVABILITY, COST-SAVING PERFORMANCE ACCELERATION, AND SUPERIOR DATA PROTECTION KEY FEATURES Unprecedented observability

More information

IBM Storwize V5000. Designed to drive innovation and greater flexibility with a hybrid storage solution. Highlights. IBM Systems Data Sheet

IBM Storwize V5000. Designed to drive innovation and greater flexibility with a hybrid storage solution. Highlights. IBM Systems Data Sheet IBM Storwize V5000 Designed to drive innovation and greater flexibility with a hybrid storage solution Highlights Customize your storage system with flexible software and hardware options Boost performance

More information

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER STORAGE CENTER DATASHEET STORAGE CENTER Go Beyond the Boundaries of Traditional Storage Systems Today s storage vendors promise to reduce the amount of time and money companies spend on storage but instead

More information

HGST Virident Solutions 2.0

HGST Virident Solutions 2.0 Brochure HGST Virident Solutions 2.0 Software Modules HGST Virident Share: Shared access from multiple servers HGST Virident HA: Synchronous replication between servers HGST Virident ClusterCache: Clustered

More information

EMC VMAX3 SERVICE LEVEL OBJECTIVES AND SNAPVX FOR ORACLE RAC 12c

EMC VMAX3 SERVICE LEVEL OBJECTIVES AND SNAPVX FOR ORACLE RAC 12c EMC VMAX3 SERVICE LEVEL OBJECTIVES AND SNAPVX FOR ORACLE RAC 12c Perform one-click, on-demand provisioning of multiple, mixed Oracle workloads with differing Service Level Objectives Non-disruptively adjust

More information

Application Workload Control Using Host I/O Limits for SQL Server on EMC Symmetrix VMAX

Application Workload Control Using Host I/O Limits for SQL Server on EMC Symmetrix VMAX WHITE PAPER Application Workload Control Using Host I/O Limits for Server on EMC Symmetrix VMAX Abstract The Symmetrix VMAX is an ideal consolidation platform designed to be simple, cost effective and

More information

ESRP Storage Program. EMC Symmetrix VMAX (100,000 User) Exchange 2010 Mailbox Resiliency Storage Solution. EMC Global Solutions

ESRP Storage Program. EMC Symmetrix VMAX (100,000 User) Exchange 2010 Mailbox Resiliency Storage Solution. EMC Global Solutions ESRP Storage Program EMC Symmetrix VMAX (100,000 User) Exchange 2010 Mailbox Resiliency Storage Solution Tested with: ESRP - Storage Version 3.0 Tested Date: March, 2010 EMC Global Solutions Copyright

More information

EMC Celerra Unified Storage Platforms

EMC Celerra Unified Storage Platforms EMC Solutions for Microsoft SQL Server EMC Celerra Unified Storage Platforms EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2008, 2009 EMC

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

CONSOLIDATING MICROSOFT SQL SERVER OLTP WORKLOADS ON THE EMC XtremIO ALL FLASH ARRAY

CONSOLIDATING MICROSOFT SQL SERVER OLTP WORKLOADS ON THE EMC XtremIO ALL FLASH ARRAY Reference Architecture CONSOLIDATING MICROSOFT SQL SERVER OLTP WORKLOADS ON THE EMC XtremIO ALL FLASH ARRAY An XtremIO Reference Architecture Abstract This Reference architecture examines the storage efficiencies

More information

EMC SYMMETRIX VMAX 10K

EMC SYMMETRIX VMAX 10K EMC SYMMETRIX VMAX 10K EMC Symmetrix VMAX 10K with the Enginuity operating environment delivers a true Tier-1 multi-controller, scale-out architecture with consolidation and efficiency for the enterprise.

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

EMC XtremSF: Delivering Next Generation Performance for Oracle Database

EMC XtremSF: Delivering Next Generation Performance for Oracle Database White Paper EMC XtremSF: Delivering Next Generation Performance for Oracle Database Abstract This white paper addresses the challenges currently facing business executives to store and process the growing

More information

The Benefits of Virtualizing

The Benefits of Virtualizing T E C H N I C A L B R I E F The Benefits of Virtualizing Aciduisismodo Microsoft SQL Dolore Server Eolore in Dionseq Hitachi Storage Uatummy Environments Odolorem Vel Leveraging Microsoft Hyper-V By Heidi

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MICROSOFT APPLICATIONS TO THE CLOUD 2 22x Partner Of Year 19+ Gold And Silver Microsoft Competencies 2,700+ Consultants Worldwide Cooperative Support Agreement Joint Use Of Technology CEO

More information

OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006

OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006 OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006 EXECUTIVE SUMMARY Microsoft Exchange Server is a disk-intensive application that requires high speed storage to deliver

More information

Technical Paper. Best Practices for SAS on EMC SYMMETRIX VMAX TM Storage

Technical Paper. Best Practices for SAS on EMC SYMMETRIX VMAX TM Storage Technical Paper Best Practices for SAS on EMC SYMMETRIX VMAX TM Storage Paper Title Table of Contents Introduction... 1 BRIEF OVERVIEW OF VMAX ARCHITECTURE... 1 PHYSICAL STORAGE DISK TYPES, FA PORTS,

More information

Frequently Asked Questions: EMC UnityVSA

Frequently Asked Questions: EMC UnityVSA Frequently Asked Questions: EMC UnityVSA 302-002-570 REV 01 Version 4.0 Overview... 3 What is UnityVSA?... 3 What are the specifications for UnityVSA?... 3 How do UnityVSA specifications compare to the

More information

Business white paper Invest in the right flash storage solution

Business white paper Invest in the right flash storage solution Business white paper Invest in the right flash storage solution A guide for the savvy tech buyer Business white paper Page 2 Introduction You re looking at flash storage because you see it s taking the

More information

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 White Paper HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 Abstract This white paper demonstrates key functionality demonstrated in a lab environment

More information

A Dell Technical White Paper Dell Compellent

A Dell Technical White Paper Dell Compellent Fluid Data Storage: Driving Flexibility in the Data Center Eight Must-Have Technologies for the IT Director A Dell Technical White Paper Dell Compellent THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY,

More information

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores

More information

The functionality and advantages of a high-availability file server system

The functionality and advantages of a high-availability file server system The functionality and advantages of a high-availability file server system This paper discusses the benefits of deploying a JMR SHARE High-Availability File Server System. Hardware and performance considerations

More information

DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2

DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2 DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2 EMC Solutions Abstract This document describes the reference architecture of a virtualized Microsoft Dynamics AX 2012 R2 implementation that is enabled

More information

EMC Business Continuity for Microsoft SQL Server 2008

EMC Business Continuity for Microsoft SQL Server 2008 EMC Business Continuity for Microsoft SQL Server 2008 Enabled by EMC Celerra Fibre Channel, EMC MirrorView, VMware Site Recovery Manager, and VMware vsphere 4 Reference Architecture Copyright 2009, 2010

More information

Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays

Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays Database Solutions Engineering By Murali Krishnan.K Dell Product Group October 2009

More information

Best Practices for Optimizing Storage for Oracle Automatic Storage Management with Oracle FS1 Series Storage ORACLE WHITE PAPER JANUARY 2015

Best Practices for Optimizing Storage for Oracle Automatic Storage Management with Oracle FS1 Series Storage ORACLE WHITE PAPER JANUARY 2015 Best Practices for Optimizing Storage for Oracle Automatic Storage Management with Oracle FS1 Series Storage ORACLE WHITE PAPER JANUARY 2015 Table of Contents 0 Introduction 1 The Test Environment 1 Best

More information

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies

More information

EMC Virtual Infrastructure for Microsoft SQL Server

EMC Virtual Infrastructure for Microsoft SQL Server Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate

More information

FLASH STORAGE SOLUTION

FLASH STORAGE SOLUTION Invest in the right FLASH STORAGE SOLUTION A guide for the savvy tech buyer Introduction You re looking at flash storage because you see it s taking the storage world by storm. You re interested in accelerating

More information

Technical White Paper Integration of ETERNUS DX Storage Systems in VMware Environments

Technical White Paper Integration of ETERNUS DX Storage Systems in VMware Environments White Paper Integration of ETERNUS DX Storage Systems in ware Environments Technical White Paper Integration of ETERNUS DX Storage Systems in ware Environments Content The role of storage in virtual server

More information

Nimble Storage Best Practices for Microsoft Exchange

Nimble Storage Best Practices for Microsoft Exchange BEST PRACTICES GUIDE: Nimble Storage Best Practices for Microsoft Exchange Table of Contents NIMBLE STORAGE OVERVIEW... 3 EXCHANGE STORAGE REFERENCE ARCHITECTURE... 3 Store Database and Transaction Log

More information

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN ENTERPRISE STORAGE WITH THE FUTURE BUILT IN Breakthrough Efficiency Intelligent Storage Automation Single Platform Scalability Real-time Responsiveness Continuous Protection Storage Controllers Storage

More information

Getting the Most Out of VMware Mirage with Hitachi Unified Storage and Hitachi NAS Platform WHITE PAPER

Getting the Most Out of VMware Mirage with Hitachi Unified Storage and Hitachi NAS Platform WHITE PAPER Getting the Most Out of VMware Mirage with Hitachi Unified Storage and Hitachi NAS Platform WHITE PAPER Getting the Most Out of VMware Mirage with Hitachi Unified Storage and Hitachi NAS Platform The benefits

More information

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Applied Technology Abstract Microsoft SQL Server includes a powerful capability to protect active databases by using either

More information

IOmark- VDI. HP HP ConvergedSystem 242- HC StoreVirtual Test Report: VDI- HC- 150427- b Test Report Date: 27, April 2015. www.iomark.

IOmark- VDI. HP HP ConvergedSystem 242- HC StoreVirtual Test Report: VDI- HC- 150427- b Test Report Date: 27, April 2015. www.iomark. IOmark- VDI HP HP ConvergedSystem 242- HC StoreVirtual Test Report: VDI- HC- 150427- b Test Copyright 2010-2014 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VM, VDI- IOmark, and IOmark

More information

HP recommended configuration for Microsoft Exchange Server 2010: HP LeftHand P4000 SAN

HP recommended configuration for Microsoft Exchange Server 2010: HP LeftHand P4000 SAN HP recommended configuration for Microsoft Exchange Server 2010: HP LeftHand P4000 SAN Table of contents Executive summary... 2 Introduction... 2 Solution criteria... 3 Hyper-V guest machine configurations...

More information

Best Practices for Microsoft

Best Practices for Microsoft SCALABLE STORAGE FOR MISSION CRITICAL APPLICATIONS Best Practices for Microsoft Daniel Golic EMC Serbia Senior Technology Consultant Daniel.golic@emc.com 1 The Private Cloud Why Now? IT infrastructure

More information

Fluid Data Storage: Driving Flexibility in the Data Center

Fluid Data Storage: Driving Flexibility in the Data Center WHITE PAPER I FEBRUARY 2010 Fluid Storage: Driving Flexibility in the Center Eight Must Have Technologies for the IT Director. Fluid Storage: Driving Flexibility in the Center I WHITE PAPER 1 EXECUTIVE

More information

Navisphere Quality of Service Manager (NQM) Applied Technology

Navisphere Quality of Service Manager (NQM) Applied Technology Applied Technology Abstract Navisphere Quality of Service Manager provides quality-of-service capabilities for CLARiiON storage systems. This white paper discusses the architecture of NQM and methods for

More information

Nimble Storage for VMware View VDI

Nimble Storage for VMware View VDI BEST PRACTICES GUIDE Nimble Storage for VMware View VDI N I M B L E B E S T P R A C T I C E S G U I D E : N I M B L E S T O R A G E F O R V M W A R E V I E W V D I 1 Overview Virtualization is an important

More information

How A V3 Appliance Employs Superior VDI Architecture to Reduce Latency and Increase Performance

How A V3 Appliance Employs Superior VDI Architecture to Reduce Latency and Increase Performance How A V3 Appliance Employs Superior VDI Architecture to Reduce Latency and Increase Performance www. ipro-com.com/i t Contents Overview...3 Introduction...3 Understanding Latency...3 Network Latency...3

More information

EMC VMAX3 FAMILY. Enterprise Data Services Platform For Mission Critical Hybrid Cloud And Hyper-Consolidation ESSENTIALS POWERFUL

EMC VMAX3 FAMILY. Enterprise Data Services Platform For Mission Critical Hybrid Cloud And Hyper-Consolidation ESSENTIALS POWERFUL EMC VMAX3 FAMILY Enterprise Data Services Platform For Mission Critical Hybrid Cloud And Hyper-Consolidation ESSENTIALS Achieve predictable performance at massive scale for extreme-growth hybrid cloud

More information

EMC Symmetrix V-Max with Veritas Storage Foundation

EMC Symmetrix V-Max with Veritas Storage Foundation EMC Symmetrix V-Max with Veritas Storage Foundation Applied Technology Abstract This white paper details the benefits of deploying EMC Symmetrix V-Max Virtual Provisioning and Veritas Storage Foundation

More information

Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture. Dell Compellent Product Specialist Team

Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture. Dell Compellent Product Specialist Team Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture Dell Compellent Product Specialist Team THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL

More information

Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V

Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Implementation Guide By Eduardo Freitas and Ryan Sokolowski February 2010 Summary Deploying

More information

June 2009. Blade.org 2009 ALL RIGHTS RESERVED

June 2009. Blade.org 2009 ALL RIGHTS RESERVED Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL RIGHTS

More information

IOmark-VM. DotHill AssuredSAN Pro 5000. Test Report: VM- 130816-a Test Report Date: 16, August 2013. www.iomark.org

IOmark-VM. DotHill AssuredSAN Pro 5000. Test Report: VM- 130816-a Test Report Date: 16, August 2013. www.iomark.org IOmark-VM DotHill AssuredSAN Pro 5000 Test Report: VM- 130816-a Test Report Date: 16, August 2013 Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark-VM, IOmark-VDI, VDI-IOmark, and IOmark

More information

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products MaxDeploy Ready Hyper- Converged Virtualization Solution With SanDisk Fusion iomemory products MaxDeploy Ready products are configured and tested for support with Maxta software- defined storage and with

More information

AX4 5 Series Software Overview

AX4 5 Series Software Overview AX4 5 Series Software Overview March 6, 2008 This document presents an overview of all software you need to configure and monitor any AX4 5 series storage system running the Navisphere Express management

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

DB2 for z/os Best Practices with Virtual Provisioning

DB2 for z/os Best Practices with Virtual Provisioning White Paper DB2 for z/os Best Practices with Virtual Provisioning Abstract This white paper provides an overview of EMC Virtual Provisioning in a mainframe environment and makes bestpractice recommendations

More information

Microsoft SQL Server 2005 on Windows Server 2003

Microsoft SQL Server 2005 on Windows Server 2003 EMC Backup and Recovery for SAP Microsoft SQL Server 2005 on Windows Server 2003 Enabled by EMC CLARiiON CX3, EMC Disk Library, EMC Replication Manager, EMC NetWorker, and Symantec Veritas NetBackup Reference

More information

Backup and Recovery Best Practices With CommVault Simpana Software

Backup and Recovery Best Practices With CommVault Simpana Software TECHNICAL WHITE PAPER Backup and Recovery Best Practices With CommVault Simpana Software www.tintri.com Contents Intended Audience....1 Introduction....1 Consolidated list of practices...............................

More information

Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study

Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study White Paper Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study 2012 Cisco and/or its affiliates. All rights reserved. This

More information

Microsoft SQL Server 2014 Fast Track

Microsoft SQL Server 2014 Fast Track Microsoft SQL Server 2014 Fast Track 34-TB Certified Data Warehouse 103-TB Maximum User Data Tegile Systems Solution Review 2U Design: Featuring Tegile T3800 All-Flash Storage Array http:// www.tegile.com/solutiuons/sql

More information

Dynamic Disk Pools Technical Report

Dynamic Disk Pools Technical Report Dynamic Disk Pools Technical Report A Dell Technical White Paper Dell PowerVault MD3 Dense Series of Storage Arrays 9/5/2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL

More information

Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Proven Solution Guide

Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Proven Solution Guide Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published March, 2011 EMC believes the information in

More information

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com Parallels Cloud Storage White Paper Performance Benchmark Results www.parallels.com Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...

More information

HP ProLiant DL580 Gen8 and HP LE PCIe Workload WHITE PAPER Accelerator 90TB Microsoft SQL Server Data Warehouse Fast Track Reference Architecture

HP ProLiant DL580 Gen8 and HP LE PCIe Workload WHITE PAPER Accelerator 90TB Microsoft SQL Server Data Warehouse Fast Track Reference Architecture WHITE PAPER HP ProLiant DL580 Gen8 and HP LE PCIe Workload WHITE PAPER Accelerator 90TB Microsoft SQL Server Data Warehouse Fast Track Reference Architecture Based on Microsoft SQL Server 2014 Data Warehouse

More information

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Quest LiteSpeed Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication

More information

BUSINESS CONTINUITY AND DISASTER RECOVERY FOR ORACLE 11g

BUSINESS CONTINUITY AND DISASTER RECOVERY FOR ORACLE 11g BUSINESS CONTINUITY AND DISASTER RECOVERY FOR ORACLE 11g ENABLED BY EMC VMAX 10K AND EMC RECOVERPOINT Technical Presentation EMC Solutions Group 1 Agenda Business case Symmetrix VMAX 10K overview RecoverPoint

More information

Reference Architecture. EMC Global Solutions. 42 South Street Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com

Reference Architecture. EMC Global Solutions. 42 South Street Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com EMC Backup and Recovery for SAP with IBM DB2 on IBM AIX Enabled by EMC Symmetrix DMX-4, EMC CLARiiON CX3, EMC Replication Manager, IBM Tivoli Storage Manager, and EMC NetWorker Reference Architecture EMC

More information

Delivering Accelerated SQL Server Performance with OCZ s ZD-XL SQL Accelerator

Delivering Accelerated SQL Server Performance with OCZ s ZD-XL SQL Accelerator enterprise White Paper Delivering Accelerated SQL Server Performance with OCZ s ZD-XL SQL Accelerator Performance Test Results for Analytical (OLAP) and Transactional (OLTP) SQL Server 212 Loads Allon

More information

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one

More information

Optimizing SQL Server AlwaysOn Implementations with OCZ s ZD-XL SQL Accelerator

Optimizing SQL Server AlwaysOn Implementations with OCZ s ZD-XL SQL Accelerator White Paper Optimizing SQL Server AlwaysOn Implementations with OCZ s ZD-XL SQL Accelerator Delivering Accelerated Application Performance, Microsoft AlwaysOn High Availability and Fast Data Replication

More information

Evaluation of Enterprise Data Protection using SEP Software

Evaluation of Enterprise Data Protection using SEP Software Test Validation Test Validation - SEP sesam Enterprise Backup Software Evaluation of Enterprise Data Protection using SEP Software Author:... Enabling you to make the best technology decisions Backup &

More information