DEPLOYMENT BEST PRACTICE FOR ORACLE DATABASE WITH VMAX 3 SERVICE LEVEL OBJECTIVE MANAGEMENT


EMC VMAX Engineering White Paper

ABSTRACT
With the introduction of the third-generation VMAX disk arrays, Oracle database administrators have a new way to deploy a wide range of applications in a single high-performance, high-capacity, self-tuning storage environment that can dynamically manage each application's performance requirements with minimal effort.

May 2015

To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, or explore and compare products in the EMC Store.

Copyright 2015 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Part Number H

TABLE OF CONTENTS

EXECUTIVE SUMMARY
AUDIENCE
VMAX 3 PRODUCT OVERVIEW
    VMAX 3 Overview
    VMAX 3 and Service Level Objective (SLO) based provisioning
STORAGE DESIGN PRINCIPLES FOR ORACLE ON VMAX 3
    Storage connectivity considerations
    Host connectivity considerations
    Number and size of host devices considerations
    Virtual Provisioning and thin devices considerations
    Partition Alignment Considerations for x86-based platforms
    ASM and database striping considerations
    Oracle data types and the choice of SLO
    Host I/O Limits and multi-tenancy
    Using cascaded storage groups
ORACLE DATABASE PROVISIONING
    Storage provisioning tasks with VMAX 3
    Provisioning Oracle database storage with Unisphere
    Provisioning Oracle database storage with Solutions Enabler CLI
ORACLE SLO MANAGEMENT TEST USE CASES
    Test Configuration
    Test Overview
    Test case 1 - Single database run with GRADUAL change of SLO
    Test case 2 - All flash configuration with Oracle DATA and REDO
CONCLUSION
REFERENCES
APPENDIX - Solutions Enabler CLI commands for SLO management and monitoring

EXECUTIVE SUMMARY

The VMAX 3 family of storage arrays is the next major step in evolving VMAX hardware and software, targeted to meet new industry challenges of scale, performance, and availability. At the same time, VMAX 3 has taken a leap in making complex operations of storage management, provisioning, and setting performance goals simple to execute and manage. VMAX 3 storage arrays come pre-configured from the factory to simplify deployment at customer sites and minimize time to first I/O. Each array uses Virtual Provisioning to allow the user easy and quick storage provisioning. While VMAX 3 can ship as an all-flash array with a combination of Enterprise Flash Drives (EFD 1) and a large cache that accelerates both writes and reads even further, it also excels in providing Fully Automated Storage Tiering (FAST 2) enabled performance management based on service level goals across multiple tiers of storage.

The VMAX 3 hardware architecture comes with more CPU power, larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric. Many enhancements were introduced to VMAX 3 replication software to support new capabilities, such as TimeFinder SnapVX local replication, which allows for hundreds of snapshots that can be incrementally refreshed or restored and can cascade any number of times. SRDF remote replication software also adds new features and capabilities that provide more robust remote replication support. VMAX 3 adds the ability to connect directly to a Data Domain system, so database backups can be sent directly from the primary storage to the Data Domain system without having to go through the host first. VMAX 3 also offers embedded file support, known as Embedded NAS (eNAS), via a hypervisor layer new to VMAX 3, in addition to traditional block storage.

This white paper explains the basic VMAX 3 design changes with regard to storage provisioning and performance management, how they simplify the management of storage, and how they affect Oracle database layout decisions. It explains the new FAST architecture for managing Oracle database performance using Service Level Objectives (SLOs) and provides guidelines and best practices for its use.

AUDIENCE

This white paper is intended for database and system administrators, storage administrators, and system architects who are responsible for implementing, managing, and maintaining Oracle databases and VMAX 3 storage systems. It is assumed that readers have some familiarity with Oracle and the EMC VMAX 3 family of storage arrays, and are interested in achieving higher database availability, performance, and ease of storage management.

VMAX 3 PRODUCT OVERVIEW

VMAX 3 OVERVIEW

The EMC VMAX 3 family of storage arrays is built on the strategy of simple, intelligent, modular storage. It incorporates a Dynamic Virtual Matrix interface that connects and shares resources across all VMAX 3 engines, allowing the storage array to seamlessly grow from an entry-level configuration into the world's largest storage array. It provides the highest levels of performance and availability featuring new hardware and software capabilities. The newest additions to the EMC VMAX 3 family, VMAX 100K, 200K, and 400K, deliver the latest in Tier-1 scale-out multi-controller architecture with consolidation and efficiency for the enterprise. With enhanced hardware and software, the new VMAX 3 arrays provide unprecedented performance and scale.
VMAX 3 offers dramatic increases in floor tile density (GB/ft²), with engines and high-capacity disk enclosures for both 2.5" and 3.5" drives consolidated in the same system bay. Figure 1 shows possible VMAX 3 components. Refer to EMC documentation and release notes to find the most up-to-date supported components. In addition, VMAX 3 arrays can be configured as either hybrid or all-flash arrays. All VMAX 3 models come pre-configured from the factory to significantly shorten the time from installation to first I/O.

1 Enterprise Flash Drives (EFD) are SSD flash drives designed for the high performance and resiliency suited to enterprise applications.
2 Fully Automated Storage Tiering (FAST) allows VMAX 3 storage to automatically and dynamically manage performance service level goals across the available storage resources to meet the application I/O demand, even as new data is added and access patterns continue to change over time.

Key VMAX 3 specifications include:

1 to 8 redundant VMAX 3 engines
Up to 4 PB usable capacity
Up to 256 FC host ports
Up to 16 TB global memory (mirrored)
Up to 384 cores, 2.7 GHz Intel Xeon E v2
Up to 5,760 drives, including SSD flash drives (200/400/800 GB), 10K RPM SAS drives (up to 1.2 TB), 15K RPM SAS drives (2.5"/3.5"), and 2 TB/4 TB 7.2K RPM SAS drives (3.5")

Figure 1 VMAX 3 storage array 3

VMAX 3 engines provide the foundation of the storage array. Each fully redundant engine contains two VMAX 3 directors and redundant interfaces to the new Dynamic Virtual Matrix dual InfiniBand fabric interconnect. Each director consolidates front-end, global memory, and back-end functions, enabling direct memory access to data for optimized I/O operations. Depending on the array chosen, up to eight VMAX 3 engines can be interconnected via a set of active fabrics that provide scalable performance and high availability.

New to the VMAX 3 design, host ports are no longer mapped directly to CPU resources. CPU resources are allocated as needed using pools (front-end, back-end, or data services pools) of CPU cores which can service all activity in the VMAX 3 array. This shared multi-core architecture reduces I/O path latencies by facilitating system scaling of processing power without requiring additional drives or front-end connectivity.

VMAX 3 arrays introduce the industry's first open storage and hypervisor converged operating system, HYPERMAX OS. It combines industry-leading high availability, I/O management, data integrity validation, quality of service, storage tiering, and data security with an open application platform. HYPERMAX OS features a real-time, non-disruptive storage hypervisor that manages and protects embedded data services (running in virtual machines) by extending VMAX high availability to data services that traditionally have run external to the array (such as Unisphere). HYPERMAX OS runs on top of the Dynamic Virtual Matrix, leveraging its scale-out flexibility of cores, cache, and host interfaces. The embedded storage hypervisor reduces external hardware and networking requirements, delivers the highest levels of availability, and dramatically lowers latency.

All storage in the VMAX 3 array is virtually provisioned. VMAX Virtual Provisioning enables users to simplify storage management and increase capacity utilization by sharing storage among multiple applications and only allocating storage as needed from a shared pool of physical disks known as a Storage Resource Pool (SRP). The array uses the dynamic and intelligent capabilities of FAST to meet specified Service Level Objectives (SLOs) throughout the lifecycle of each application. SLOs and SLO provisioning are new to the VMAX 3 family and are tightly integrated with EMC FAST software to optimize agility and array performance across all drive types in the system. While VMAX 3 can ship in an all-flash configuration, when purchased with hybrid drive types as a combination of flash and hard drives, EMC FAST technology can improve application performance and at the same time reduce cost by intelligently using a combination of high-performance flash drives with cost-effective, high-capacity hard disk drives.

3 Additional drive types and capacities may be available. Contact an EMC representative for more details.

For local replication, VMAX 3 adds a new feature to TimeFinder software called SnapVX, which provides support for a greater number of snapshots. Unlike previous VMAX snapshots, SnapVX snapshots do not require the use of dedicated target devices. SnapVX allows for up to 256 snapshots per individual source. These snapshots can copy (referred to as link-copy) their data to new target devices and re-link to update just the incremental data changes of previously linked devices. For remote replication, SRDF adds new capabilities and features to provide protection for Oracle databases and applications.

All user data entering VMAX 3 is T10 DIF protected, including replicated data and data on disks. T10 DIF protection can be extended all the way to the host and application to provide full end-to-end data protection for Oracle databases, using either Oracle ASMLib with UEK or ASM Filter Driver on a variety of supported Linux operating systems 4.

VMAX 3 AND SERVICE LEVEL OBJECTIVE (SLO) BASED PROVISIONING

Introduction to FAST in VMAX 3

With VMAX 3, FAST is enhanced to include both intelligent storage provisioning and performance management, using Service Level Objectives (SLOs). SLOs automate the allocation and distribution of application data to the correct data pool (and therefore storage tier) without manual intervention. Simply choose the SLO (for example, Platinum, Gold, or Silver) that best suits the application requirement. SLOs are tied to an expected average I/O latency for both reads and writes; therefore, both the initial provisioning and the application's ongoing performance are automatically measured and managed based on compliance to storage tiers and performance goals. Every 10 minutes FAST samples the storage activity and, when necessary, moves data at FAST's sub-LUN granularity, which is 5.25MB (42 extents of 128KB). SLOs can be dynamically changed at any time (promoted or demoted), and FAST continuously monitors and adjusts data location at sub-LUN granularity across the available storage tiers to match the performance goals provided. All of this is done automatically, within the VMAX 3 storage array, without having to deploy a complex application ILM 5 strategy or use host resources for migrating data due to performance needs.

VMAX 3 FAST Components

Figure 2 depicts the elements of FAST that form the basis for SLO based management, as described below.

Physical disk group provides a grouping of physical storage (flash or hard disk drives) based on drive type. All drives in a disk group have the same technology, capacity, form factor, and speed. The disk groups are pre-configured at the factory, based on the configuration requirements specified at the time of purchase.

Data Pool is a collection of RAID-protected internal devices (also known as TDATs, or thin data devices) that are carved out of a single physical disk group. Each data pool can belong to a single SRP (see definition below) and provides a tier of storage based on its drive technology and RAID protection. Data pools can allocate capacity for host devices or replications. Data pools are also pre-configured at the factory to provide optimal RAID protection and performance.

Storage Resource Pool (SRP) is a collection of data pools that provides FAST a domain for capacity and performance management. By default, a single default SRP is factory pre-configured. Additional SRPs can be created with an EMC service engagement. The data movements performed by FAST are done within the boundaries of the SRP and are covered in detail later in this paper.

Storage Group (SG) is a collection of host devices (LUNs) that consume storage capacity from the underlying SRP. Because both FAST and storage provisioning operations are managed at the storage group level, storage groups can be cascaded (hierarchical) to allow the different levels of granularity required for each operation (see the cascaded storage group section later).

Host devices (LUNs) are the components of a storage group. In VMAX 3 all host devices are virtual and at the time of creation can be fully allocated or thin. Virtual means that they are a set of pointers to data in the data pools, allowing FAST to manage the data location across data pools seamlessly. Fully allocated means that the device's full capacity is reserved in the data pools even before the host has access to the device. Thin means that although the host sees the LUN with its full reported capacity, in reality no capacity is allocated from the data pools until explicitly written to by the host. All host devices are natively striped across the data pools where they are allocated, with a granularity of a single VMAX 3 track, which is 128KB.

4 Consult the EMC Simple Support Matrix for VMAX DIF1 Director Bit Settings document for more information on T10 DIF supported HBAs and operating systems.
5 Information Lifecycle Management (ILM) refers to a strategy of managing application data based on policies. It usually involves complex data analysis, mapping, and tracking practices.

Service Level Objective (SLO) provides a pre-defined set of service levels (such as Platinum, Gold, or Silver) that can be supported by the underlying SRP. Each SLO has a specific performance goal, and some also have tier 6 compliance goals for the underlying SRP that FAST will work to satisfy. For example, the Bronze SLO will attempt to have no data on EFDs, and the Platinum SLO will attempt to have no data on 7.2K RPM drives. An SLO defines an expected average response time target for a storage group. By default, all host devices and all storage groups are attached to the Optimized SLO (which will assure I/Os are serviced from the most appropriate data pool for their workload), but in cases where more deterministic performance goals are needed, a specific SLO can be specified.

Figure 2 VMAX 3 architecture and service level provisioning

6 Storage tiers refer to combinations of disk drive and RAID protection that create unique storage service levels, for example, flash (SSD) drives with RAID 5 protection, or 10K or 15K RPM drives with RAID 1 protection.
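The pre-configured SRP, its data pools, and the storage groups defined against it can be inspected from the Solutions Enabler CLI. The sketch below is illustrative only: the array ID (0123) is a placeholder and option names may vary by Solutions Enabler version, so verify the exact syntax against the Solutions Enabler documentation (see also the Appendix).

# List the Storage Resource Pools configured on the array (the factory default is a single SRP)
symcfg list -sid 0123 -srp
# Show the data pools, drive technologies, and capacities behind the SRP
symcfg list -sid 0123 -srp -detail
# List the storage groups currently defined on the array
symsg list -sid 0123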

Service Level Objectives (SLO) and Workload Types Overview

Each storage resource pool (SRP) contains a set of known storage resources, as seen in Figure 2. Based on the available resources in the SRP, HYPERMAX OS will offer a list of available Service Level Objectives (SLOs) that can be met using that particular SRP, as shown in Table 1. This assures that SLOs can be met and that SRPs aren't provisioned beyond their ability to meet application requirements.

Note: Since SLOs are tied to the available drive types, it is important to plan the requirements for a new VMAX 3 system carefully. EMC works with customers using a new and easy-to-use Sizer tool to assist with this task.

Table 1 Service Level Objectives

SLO       | Minimum required drive combination to list SLO | Performance expectation
Diamond   | EFD                                            | Emulating EFD performance
Platinum  | EFD and (15K or 10K)                           | Emulating performance between 15K drives and EFD
Gold      | EFD and (15K or 10K or 7.2K)                   | Emulating 15K drive performance
Silver    | EFD and (15K or 10K or 7.2K)                   | Emulating 10K drive performance
Bronze    | 7.2K and (15K or 10K)                          | Emulating 7.2K drive performance
Optimized | Any                                            | System optimized performance

By default, no specific SLO needs to be selected; all data in the VMAX 3 storage array receives the Optimized SLO. The System Optimized SLO meets performance and compliance requirements by dynamically placing the most active data in the highest-performing tier and less active data in low-performance, high-capacity tiers.

Note: The Optimized SLO offers an optimal balance of resources and performance across the whole SRP, based on I/O load, type of I/Os, data pool utilization, and available capacities in the pools. It will place the most active data on higher-performing storage and the least active data on the most cost-effective storage. If data pool capacity or utilization is stressed, it will attempt to alleviate it by using other pools.

However, when specific storage groups (database LUNs) require a more deterministic SLO, one of the other available SLOs can be selected. For example, a storage group holding critical Oracle data files can receive the Diamond SLO while the Oracle log files can be put on Platinum. A less critical application can be fully contained in the Gold or Silver SLO. Refer to the Oracle Data Types and the Choice of SLO section later in the paper.

Once an SLO is selected (other than Optimized), it can be further qualified by a workload type: OLTP or DSS, where the OLTP workload is focused on optimizing performance for small-block I/O and the DSS workload is focused on optimizing performance for large-block I/O. The workload type can also specify whether to account for any overhead associated with replication (local or remote). The workload type qualifiers for replication overhead are OLTP_Rep and DSS_Rep, where Rep denotes replicated.
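The SLO and workload combinations an array can offer, with their expected average response times, can also be listed from the Solutions Enabler CLI, as referenced in the Appendix. The command below is a sketch; the array ID is a placeholder and the exact option names may differ between Solutions Enabler releases.

# List the Service Level Objectives available on the array and their expected average response times
symcfg list -sid 0123 -slo -detail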

Understanding SLO Definitions and Workload Types

Each SLO is effectively a reference to an expected response-time range (minimum and maximum allowed latencies) for host I/Os, where a particular Expected Average Response Time is attached to each SLO and workload combination. The Solutions Enabler CLI or Unisphere for VMAX can list the available service levels and workload combinations, as seen in Figure 3 (see the command line syntax example for listing available SLOs in the Appendix); note that they only list the expected average latency, not the range of values. Without a workload type, the latency range is the widest for its SLO type. When a workload type is added, the range is reduced, due to the added information. When Optimized is selected (which is also the default SLO for all storage groups, unless the user assigns another), the latency range is in fact the full latency spread that the SRP can satisfy, based on its known and available components.

Figure 3 Unisphere shows available SLOs

Important SLO considerations:

Because an SLO references a range of target host I/O latencies, the smaller the spread, the more predictable the result. It is therefore recommended to select both an SLO and a workload type, for example, Platinum SLO with OLTP workload and no replications.

Because an SLO references an Expected Average Response Time, it is possible for two applications executing a similar workload and set with the same SLO to perform slightly differently. This can happen if the host I/O latency still falls within the allowed range. For that reason it is recommended to use a workload type together with an SLO when a smaller range of latencies is desirable.

Note: SLOs can be easily changed using Solutions Enabler or Unisphere for VMAX. Also, when it is necessary to add additional layers of SLOs, the Storage Group (SG) can easily be changed into a cascaded SG so that each child, or the parent, can receive its appropriate SLO.
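Because SLOs can be changed non-disruptively, a storage group can be promoted or demoted from the CLI as business priorities change. The following sketch is illustrative only: the array ID, storage group name, and option names (such as -slo and -wl) are assumptions that may differ by Solutions Enabler release, so confirm the exact syntax in the Solutions Enabler documentation or the Appendix before use.

# Assign the Platinum SLO with an OLTP workload type to an existing storage group
symsg -sid 0123 -sg oracle_data_sg set -slo Platinum -wl OLTP
# Later, demote the same storage group to Silver, for example when it becomes a reporting copy
symsg -sid 0123 -sg oracle_data_sg set -slo Silver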

STORAGE DESIGN PRINCIPLES FOR ORACLE ON VMAX 3

VMAX 3 storage provisioning has become much simpler than in previous releases. Since VMAX 3 physical disk groups, data pools, and even the default SRP come pre-configured from the factory, based on inputs to the Sizer tool that helps size them correctly, the only thing required is to configure connectivity between your hosts and the VMAX 3 and then start provisioning host devices. The following sections discuss the principles and considerations for storage connectivity and provisioning for Oracle.

STORAGE CONNECTIVITY CONSIDERATIONS

When planning storage connectivity for performance and availability, it is recommended to go wide before going deep, which means it is better to connect storage ports across different engines and directors 7 than to use all the ports on a single director. In this way, even in the case of a component failure, the storage can continue to service host I/Os.

New to VMAX 3 is dynamic core allocation. Each VMAX 3 director provides services such as front-end connectivity, back-end connectivity, or data management. Each such service has its own set of cores on each director that are pooled together to provide CPU resources which can be allocated as necessary. For example, even if host I/Os arrive via a single front-end port on the director, the front-end pool with all its CPU cores will be available to service that port. Because I/Os arriving at other directors will have their own core pools, for best performance and availability it is again recommended to connect each host to ports on different directors before using additional ports on the same director.

HOST CONNECTIVITY CONSIDERATIONS

Host connectivity considerations include two aspects: the first is the number and speed of the HBA ports (initiators), and the second is the number and size of host devices.

HBA port considerations: Each HBA port (initiator) creates a path for I/Os between the host and the SAN switch, which then continues to the VMAX 3 storage. If a host were to use only a single HBA port, it would have a single I/O path that has to serve all I/Os. Such a design is not advisable, as a single path doesn't provide high availability and also risks a potential bottleneck during high I/O activity for lack of additional ports for load balancing. A better design provides each database server with at least two HBA ports, preferably on two separate HBAs. The additional ports provide more connectivity and also allow multipathing software, like EMC PowerPath or Linux Device-Mapper, to load-balance and fail over across HBA paths.

Each path between a host and a storage device creates a SCSI device representation on the host. For example, two HBA ports going to two VMAX front-end adapter ports with a 1:1 relationship create three representations for each host device: one for each path, and another that the multipathing software creates as a pseudo device (such as /dev/emcpowera, /dev/dm-1, etc.). If each HBA port were zoned and masked to both FA ports (a 1-to-many relationship), there would be five SCSI device representations for each host device (one for each path combination, plus the pseudo device). While modern operating systems can manage hundreds of devices, it is not advisable or necessary, and it burdens the user with complex tracking and storage provisioning management overhead. It is therefore recommended to have enough HBA ports to support workload concurrency, availability, and throughput, but to use 1:1 relationships to storage front-end ports, and not have each HBA port zoned and masked to all VMAX front-end ports. Such an approach provides enough connectivity, availability, and concurrency, yet reduces the complexity of the host registering many SCSI devices unnecessarily.

7 Each VMAX 3 engine has two redundant directors.
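Once zoning and masking are in place, it is worth confirming that each host device is visible over the intended number of paths and that no single path or device queue is overloaded. The commands below are a minimal Linux sketch; the device name ora_data1 is an example, and the output format depends on the operating system and multipathing software in use.

# Show the native paths behind one Device-Mapper multipath pseudo device
multipath -ll ora_data1
# PowerPath equivalent: list every pseudo device and the state of its native paths
powermt display dev=all
# Watch per-device queue sizes (avgqu-sz) and service times while the database is under load
iostat -xm 5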

NUMBER AND SIZE OF HOST DEVICES CONSIDERATIONS

VMAX 3 introduces the ability to create host devices with capacities from a few megabytes to multiple terabytes. With the native striping across data pools that VMAX 3 provides, the user may be tempted to create only a few very large host devices. Consider the following example: a 10TB Oracle database can reside on 1 x 10TB host device, or on 10 x 1TB host devices. While either option satisfies the capacity requirement, it is recommended to use a reasonable number and size of host devices. In the example above, if the database capacity were to rise above 10TB, it is likely that the DBA would want to add another device of the same capacity (which is an Oracle ASM best practice), even if they didn't need 20TB in total. Large host devices therefore create very large building blocks when additional storage is needed.

Secondly, each host device creates its own host I/O queue at the operating system. Each such queue can service a tunable, but limited, number of I/Os simultaneously. If, for example, the host had only 4 HBA ports and only a single 10TB LUN (using the previous example again), with multipathing software it would have only 4 paths available to queue I/Os. A high level of database activity will generate more I/Os than the queues can service, resulting in artificially elongated latencies. In this example, more host devices are advisable to alleviate such an artificial bottleneck. Host software such as EMC PowerPath or iostat can help in monitoring host I/O queues to make sure the number of devices and paths is adequate for the workload.

Another benefit of using multiple host devices is that internally the storage array can use more parallelism when operations such as FAST data movement or local and remote replications take place. By performing more copy operations simultaneously, the overall operation takes less time. While there is no single magic number for the size and number of host devices, we recommend finding a reasonably low number that offers enough concurrency, provides an adequate building block for capacity when additional storage is needed, and doesn't become too large to manage.

VIRTUAL PROVISIONING AND THIN DEVICES CONSIDERATIONS

All VMAX 3 host devices are virtually provisioned (also known as thin provisioned), meaning they are merely a set of pointers to capacity allocated at 128KB extent granularity in the storage data pools. However, to the host they look and respond just like normal LUNs. Using pointers allows FAST to move the application data between the VMAX 3 data pools without affecting the host. It also allows better capacity efficiency for TimeFinder snapshots by sharing extents when data doesn't change between snapshots. Virtual Provisioning offers a choice of whether to fully allocate the host device capacity or to allow allocation on demand. A fully allocated device consumes all its capacity in the data pool on creation, and therefore there is no risk that future writes may fail if the SRP has no capacity left 8. On the other hand, allocation on demand allows over-provisioning, meaning that although the storage devices are created and look to the host as available with their full capacity, actual capacity is only allocated in the data pools when host writes occur. This is a common cost-saving practice.

Allocation on demand is suitable in situations when:

The application's capacity growth rate is unknown, and
The user prefers not to commit large amounts of storage ahead of time, as it may never get used, and
The user prefers not to disrupt host operations at a later time by adding more devices.

Therefore, if allocation on demand is leveraged, capacity will only be physically assigned as it is needed to meet application requirements.

Note: Allocation on demand works very well with Oracle ASM in general, as ASM tends to write over deleted areas in the ASM disk group and re-use the space efficiently. When ASM Filter Driver is used, deleted capacity can be easily reclaimed in the SRP. This is done by adding a Thin attribute to the ASM disk group and performing a manual ASM rebalance.

8 FAST allocates capacity in the appropriate data pools based on the workload and SLO. However, when a data pool is full, FAST may use other pools in the SRP to prevent host I/O failure.
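A minimal sketch of the reclaim procedure mentioned in the note above, assuming Oracle 12c with ASM Filter Driver and a disk group named DATA; the thin_provisioned attribute and its behavior depend on the Oracle version, so verify against Oracle documentation before using it in production.

# Mark the disk group as thin-provisioned, then trigger a manual rebalance so the compact phase
# can return freed extents to the storage; DATA and the power level are example values only.
sqlplus / as sysasm <<EOF
ALTER DISKGROUP DATA SET ATTRIBUTE 'thin_provisioned'='TRUE';
ALTER DISKGROUP DATA REBALANCE POWER 4 WAIT;
EOF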

Since Oracle pre-allocates capacity in the storage when database files are created, when allocation on demand is used it is best to deploy a strategy where database capacity is grown over time based on actual need. For example, if ASM was provisioned with a thin device of 2TB, rather than immediately creating data files of 2TB and consuming all its space, the DBA should create data files that consume only the capacity necessary for the next few months, adding more data files at a later time, or increasing their size, based on need.

PARTITION ALIGNMENT CONSIDERATIONS FOR X86-BASED PLATFORMS

ASM requires at least one partition on each host LUN. Some operating systems (such as Solaris) also require at least one partition for user data. Due to a legacy BIOS architecture, by default, x86-based operating systems tend to create partitions with an offset of 63 blocks, or 63 x 512 bytes = 31.5KB. This offset is not aligned with the VMAX track boundary (128KB for VMAX 3). As a result, I/Os crossing track boundaries may be requested in two operations, causing unnecessary overhead and a potential for performance problems.

Note: It is strongly recommended to align the host partition of VMAX devices to an offset such as 1MB (2048 blocks). Use the Linux parted command, or the expert mode in the fdisk command, to move the partition offset.

The following example illustrates use of the parted Linux command with DM-Multipath or PowerPath:

# DM-Multipath:
for i in {1..32}
do
  parted -s /dev/mapper/ora_data$i mklabel msdos
  parted -s /dev/mapper/ora_data$i mkpart primary 2048s 100%
done

# PowerPath:
for i in ct cu cv cw cx cy cz da db dc dd de
do
  parted -s /dev/emcpower$i mklabel msdos
  parted -s /dev/emcpower$i mkpart primary 2048s 100%
done

The following example illustrates use of the fdisk command:

[root@dsib0063 scripts]# fdisk /dev/mapper/ora_data1
...
Command (m for help): n                                        (create a new partition)
Command action
   e   extended
   p   primary partition (1-4)
p                                                               (this will be a primary partition)
Partition number (1-4): 1                                       (create the first partition)
First cylinder (default 1): [ENTER]                             (use default)
Using default value 1
Last cylinder, +cylinders or +size{k,m,g} (default 13054): [ENTER]   (use full LUN capacity)
Using default value 13054
Command (m for help): x                                         (change to expert command mode)
Expert command (m for help): p                                  (print partition table)
Disk /dev/mapper/ora_data1: 255 heads, 63 sectors, 13054 cylinders
Nr AF  Hd Sec  Cyl  Hd Sec  Cyl     Start      Size ID
Expert command (m for help): b                                  (move partition offset)
Partition number (1-4): 1                                       (move partition 1 offset)
New beginning of data (default 63): 2048                        (offset of 1MB)
Expert command (m for help): p                                  (print partition table again)
Disk /dev/mapper/ora_data1: 255 heads, 63 sectors, 13054 cylinders
Nr AF  Hd Sec  Cyl  Hd Sec  Cyl     Start      Size ID

Expert command (m for help): w                                  (write updated partition table)

ASM AND DATABASE STRIPING CONSIDERATIONS

Host striping occurs when a host allocates capacity to a file and the storage allocations don't all take place as one contiguous allocation on a single host device. Instead, the file's storage allocation is spread (striped) across multiple host devices to provide more concurrency, even though to anyone trying to read or write the file it appears contiguous. When the Oracle database issues reads and writes randomly across the data files, striping is not of great importance, since the access pattern is random anyway. However, when a file is read or written sequentially, striping can be of great benefit, as it spreads the workload across multiple storage devices, creating more parallelism of execution and, often, higher performance. Without striping, the workload is directed to a single host device, with limited ability for parallelism.

Oracle ASM natively stripes its content across the ASM members (storage devices). ASM uses two types of striping. The first, which is the default for most Oracle data types, is called coarse striping; it allocates capacity across the ASM disk group 9 members round-robin, with a 1MB default allocation unit (AU), or stripe depth. The ASM AU can be sized from 1MB (the default) up to 64MB. The second type of ASM striping is called fine-grain striping, and is used by default only for the control files. Fine-grain striping divides the ASM members into groups of 8, allocates an AU on each, and stripes the newly created data at 128KB across the 8 members until the AU on each of the members is full. It then selects another 8 members and repeats the process until all user data is written. This process usually takes place during Oracle file initialization, when the DBA creates data files, tablespaces, or a database.

The type of striping for each Oracle data type is kept in ASM templates, which are associated with the ASM disk groups. Existing ASM extents are not affected by template changes, and therefore it is best to set the ASM templates correctly as soon as the ASM disk group is created. To inspect the ASM templates, execute the following command:

SQL> select name, stripe from V$ASM_TEMPLATE;

Typically, ASM default behavior is adequate for most workloads. However, when an Oracle database expects a high update rate, which generates a lot of redo, EMC recommends setting the redo log ASM template 10 to fine-grain instead of coarse to create better concurrency. To change the database redo log template, execute the following command on the ASM disk group holding the logs:

SQL> ALTER DISKGROUP <REDO_DG> ALTER TEMPLATE onlinelog ATTRIBUTES (FINE);

When Oracle databases are created with a focus on sequential reads or writes, such as for analytics applications, decision support, and data warehouses, EMC recommends setting the data file template to fine-grain and increasing the allocation unit from the 1MB default to 4MB or 8MB. The following example shows how to change the AU size of a disk group during creation:

SQL> CREATE DISKGROUP <DSS_DG> EXTERNAL REDUNDANCY
     DISK 'AFD:ORA_DEV1' SIZE 10G, 'AFD:ORA_DEV2' SIZE 10G
     ATTRIBUTE 'compatible.asm'=' ', 'au_size'='8m';

The following example shows how to change the stripe type of the DSS_DG disk group to fine-grain:

SQL> ALTER DISKGROUP <DSS_DG> ALTER TEMPLATE datafile ATTRIBUTES (FINE);

In a similar fashion, the tempfile template can also be modified to use fine-grain striping for applications where a lot of temp files are generated.

9 EMC recommends no ASM mirroring (i.e., external redundancy), which creates a single ASM failure group. However, when ASM mirroring is used, or similarly, when multiple ASM failure groups are manually created, the striping will occur within a failure group rather than at the disk group level.
10 Since each ASM disk group has its own template settings, modifications such as the redo log template change should only take place in the appropriate disk group where the logs reside.

ORACLE DATA TYPES AND THE CHOICE OF SLO

The following sections describe considerations for various Oracle data types and the selection of SLOs to achieve the desired performance.

Planning SLOs for Oracle databases

VMAX 3 storage arrays can support many enterprise applications, together with all their replication needs and auxiliary systems (such as test, development, reporting, patch-testing, and others). With FAST and Service Level Objective (SLO) management, it is easy to provide the right amount of resources to each such environment, and to modify it as business priorities or performance needs change over time. This section discusses some of the considerations regarding different Oracle data types and SLO assignment for them.

When choosing an SLO for an Oracle database, consider the following:

While FAST operates at a sub-LUN granularity to satisfy SLO and workload demands, the SLO is set at a storage group granularity (a group of devices). It is therefore important to match the storage group to sets of devices of equal application and business priority (for example, a storage group can contain one or more ASM disk groups, but a single ASM disk group should never be divided across multiple storage groups with more than a single SLO, since Oracle stripes the data across it).

Consider that with VMAX 3 all writes go to the cache, which is persistent, and are destaged to the back-end storage later (lazy write). Therefore, unless other factors are in play (such as synchronous remote replication, long I/O queues, or a system that is over-utilized), write latency should always be very low (cache hit), regardless of the SLO or the disk technology storing the data. On a well-balanced system, the SLO's primary effect is on read latency.

In general, EMC recommends that mission-critical databases separate the following data types onto distinct sets of devices (using an ASM example):

+GRID (when RAC is configured): when RAC is installed, it keeps the cluster configuration file and quorum devices inside the initial ASM disk group. When RAC is used, EMC recommends that only this disk group use Normal or High ASM redundancy (double or triple ASM mirroring). The reason is that it is very small in size, so mirroring hardly makes a difference; however, it tells Oracle to create more quorum devices. All other disk groups should normally use External redundancy, leveraging the capacity savings and VMAX 3 RAID protection.

Note: Don't mix database data with +GRID if storage replication is used, as cluster information is unique to its location. If a replica is to be mounted on another host, a different +GRID can be pre-created there with the correct cluster information for that location.

+DATA: a minimum of one disk group for data and control files. Large databases may use more disk groups for data files, based on business needs, retention policy, etc. Each such disk group can have its own SLO, using a VMAX 3 storage group or a cascaded storage group to set it.

+REDO: online redo logs. A single ASM disk group, or sometimes two (when logs are multiplexed). It is recommended to separate data from logs for performance reasons, but also so that when TimeFinder is used for backup/recovery, a restore of the data file devices will not overwrite the redo logs.

+TEMP (optional): typically temp files can reside with data files; however, when TEMP is very active and very large, the DBA may decide to separate it into its own ASM disk group and thus allow a different SLO and performance management. The DBA may also decide to separate TEMP onto its own devices when storage replication is used, since temp files don't need to be replicated (they can easily be recreated if needed), saving bandwidth for remote replication.

+FRA: typically for archive and/or flashback logs. If flashback logs consume a lot of space, the DBA may decide to separate archive logs from flashback logs.

The following sections address SLO considerations for these data types.

SLO considerations for Oracle data files

A key part of performance planning for an Oracle database is understanding the business priority of the application it serves; with large databases it can also be important to understand the structure of the schemas, tablespaces, partitions, and the associated data files. A default SLO can be used for the whole database for simplicity, but when more control over database performance is necessary, a distinct SLO should be used, together with a workload type. The choice of workload type is rather simple: for databases focused on sequential reads/writes, a DSS type should be used. For databases that either focus on transactional applications (OLTP) or run mixed workloads, such as both transactional and reporting, an OLTP type should be used. If remote storage replication (SRDF) is used, add "with Replication" to the workload type.

When to use Diamond SLO: Diamond SLO is only available when EFDs are present in the SRP. It tells FAST to move all the allocated storage extents in that storage group to EFDs, regardless of the I/O activity to them. Diamond provides the best read I/O latency, as flash technology is best for random reads. Diamond is also popular for mission-critical databases servicing many users, where the system is always busy, or where groups of users start their workloads intermittently and expect high performance with low latency. With the whole storage group on EFDs, best performance is provided regardless of when a user becomes active.

When to use Bronze SLO: Bronze SLO doesn't allow the storage group to leverage EFDs, regardless of the I/O activity. It is a good choice for databases that don't require stringent performance and should let more critical applications utilize capacity on EFDs. For example, databases can use Bronze SLO when their focus is development, test, and reports. Another use for Bronze SLO is for gold copies of the database.

When to use Optimized SLO: Optimized SLO is a good default when FAST should make the best decisions based on actual workload and for the storage array as a whole. Because Optimized SLO uses the widest range of allowed I/O latencies, FAST will attempt to give the active extents in the storage group the best performance (including EFDs if possible). However, if there are competing workloads with an explicit SLO, they may get priority for the faster storage tiers, based on the smaller latency range those SLOs have.

When to use Silver, Gold, or Platinum SLO: as explained earlier, each SLO provides a range of allowed I/O latency that FAST will work to maintain. Provide the SLO that best fits the application based on business and performance needs. Refer to Table 1 and Figure 3 to determine the desirable SLO.

SLO considerations for log files

An active redo log file exhibits sequential write I/O by the log writer, and once the log is switched, typically an archiver process will initiate sequential read I/O from that file. Since all writes in VMAX 3 go to cache, the SLO has limited effect on log performance. Archiver reads are not latency critical, so there is no need to dedicate high-performance storage to the archiver. Considering this, Oracle logs can use any SLO, since they are write-latency critical and the write latency has only to do with the VMAX 3 cache, not the back-end storage technology. Therefore, Oracle log files can normally use the Optimized (default) SLO or the same SLO as is used for the data files. In special cases, where the DBA wants the logs on the best storage tiers, Platinum or Diamond can be used instead.

SLO considerations for TEMP and ARCHIVE logs

Temp files have a sequential read and sequential write I/O profile, and archive logs have a sequential write I/O profile. In both cases any SLO will suffice, and low-latency SLOs (such as Diamond or Platinum) should likely be kept for other Oracle file types that focus on smaller I/O and are more random-read in nature. Unless there are specific performance needs for these file types, Optimized SLO can be used for simplicity.

SLO considerations for INDEXES

Index access is often performed in memory, and indexes are often mixed with the data files and share their SLO. However, when indexes are large, they may incur a lot of storage I/O. In that case it may be useful to separate them onto their own LUNs (or ASM disk group) and use a low-latency SLO (such as Gold, Platinum, or even Diamond), as index access is typically random and latency critical.

SLO considerations for All-Flash workloads

When a workload requires predictable low-latency/high-IOPS performance, or when many users with intermittent workload peaks share a consolidated environment, each requiring high performance during their respective activity time, an all-flash configuration is suitable. All-flash deployment is also suitable when data center power and floor space are limited and a high-performance consolidated environment is desirable.

Note: Unlike all-flash appliances, VMAX 3 offers a choice of a single EFD tier or multiple tiers. Since most databases require additional capacity for replicas, test/dev environments, and other copies of the production data, consider a hybrid array for these replicas, and simply assign the production data to the Diamond SLO.

SLO considerations for noisy neighbors and competing workloads

In highly consolidated environments, many databases and applications compete for storage resources. FAST can provide each with the appropriate performance when specific SLOs and workload types are specified. By using different SLOs for each such application (or group of applications), it is easy to manage such a consolidated environment and modify the SLOs when business requirements change. Refer to the next section for additional ways of controlling performance in a consolidated environment.

HOST I/O LIMITS AND MULTI-TENANCY

The Host I/O Limits quality of service (QoS) feature was introduced in the previous generation of VMAX arrays, and it continues to offer VMAX 3 customers the option to place specific IOPS or bandwidth limits on any storage group, regardless of the SLO assigned to that group. Assigning a specific Host I/O Limit for IOPS, for example, to a storage group with low performance requirements can ensure that a spike in I/O demand will not saturate its storage, cause FAST to inadvertently migrate extents to higher tiers, or overload the storage, affecting the performance of more critical applications. Placing a specific IOPS limit on a storage group will limit the total IOPS for the storage group, but it does not prevent FAST from moving data based on the SLO for that group. For example, a storage group with Gold SLO may have data in both EFD and HDD tiers to satisfy the I/O latency goals, yet be limited to the IOPS provided by the Host I/O Limit.

USING CASCADED STORAGE GROUPS

VMAX 3 offers cascaded storage groups, wherein multiple child storage groups can be associated with a single parent storage group for ease of manageability and storage provisioning. This provides flexibility by associating different SLOs with individual child storage groups to manage service levels for various application objects, while using the cascaded (parent) storage group for storage provisioning. Figure 4 shows an Oracle server using a cascaded storage group. The Oracle +DATA ASM disk group is set to use the Gold SLO, whereas the +REDO ASM disk group is set to use the Silver SLO. Both storage groups are part of a cascaded storage group, Oracle_DB_SG, which can be used to provision all the database devices to the host, or to multiple hosts in the case of a cluster.

Figure 4 Cascaded storage group
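A Solutions Enabler sketch of the cascaded storage group layout from Figure 4 follows. The array ID, device numbers, and group names are placeholders, and option names may differ between Solutions Enabler releases, so treat this as illustrative and verify the exact syntax against the Solutions Enabler documentation.

# Create the child storage groups with their own SLOs (names, SRP, and device numbers are placeholders)
symsg -sid 0123 create oracle_data_sg -slo Gold -srp SRP_1
symsg -sid 0123 create oracle_redo_sg -slo Silver -srp SRP_1
symsg -sid 0123 -sg oracle_data_sg add dev 00A0          # repeat for each +DATA device
symsg -sid 0123 -sg oracle_redo_sg add dev 00B0          # repeat for each +REDO device
# Create the parent storage group and cascade the children under it for provisioning
symsg -sid 0123 create Oracle_DB_SG
symsg -sid 0123 -sg Oracle_DB_SG add sg oracle_data_sg,oracle_redo_sg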

ORACLE DATABASE PROVISIONING

STORAGE PROVISIONING TASKS WITH VMAX 3

Since VMAX 3 comes pre-configured with data pools and a Storage Resource Pool (SRP), all that is left to do is create the host devices and make them visible to the hosts via an operation called device masking.

Note: Remember that zoning at the switch sets the physical connectivity, which device masking then defines more narrowly. Zoning needs to be set ahead of time between the host initiators and the storage ports that will be used for device masking tasks.

Device creation is an easy task and can be performed in a number of ways:

1) Using the Unisphere for VMAX 3 UI
2) Using the Solutions Enabler CLI
3) Using the Oracle Enterprise Manager 12c Cloud Control DBaaS plugin for VMAX 3 11

Device masking is also an easy task and includes the following steps (a CLI sketch follows below):

1) Creation of an Initiator Group (IG). An initiator group is the list of host HBA port WWNs to which the devices will be visible.
2) Creation of a Storage Group (SG). Since storage groups are used for both FAST SLO management and storage provisioning, review the discussion on cascaded storage groups earlier.
3) Creation of a Port Group (PG). A port group is the group of VMAX 3 front-end ports where the host devices will be mapped and visible.
4) Creation of a Masking View (MV). A masking view brings together a combination of SG, PG, and IG.

Device masking helps in controlling access to storage. For example, storage ports can be shared across many servers, but only the masking view determines which of the servers will have access to the appropriate devices and storage ports.

11 At the time this paper was written, the Oracle EM plugin for VMAX 3 provisioning and cloning was planned but not yet available. Check the Oracle web page for available EM 12c Cloud Control plugins or contact EMC.
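The following device masking sketch uses the Solutions Enabler symaccess command. The group names, WWNs, and director:port identifiers are placeholders for illustration; adapt them to your environment and confirm the syntax for your Solutions Enabler release.

# Create an initiator group from the host HBA port WWNs (example WWNs shown)
symaccess -sid 0123 create -name oracle_ig -type initiator -wwn 10000000c9aaaa01
symaccess -sid 0123 -name oracle_ig -type initiator add -wwn 10000000c9aaaa02
# Create a port group from the VMAX 3 front-end director:ports (placeholders shown)
symaccess -sid 0123 create -name oracle_pg -type port -dirport 1D:8,2D:8
# Bring the storage group, port group, and initiator group together in a masking view
symaccess -sid 0123 create view -name oracle_mv -sg Oracle_DB_SG -pg oracle_pg -ig oracle_ig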

PROVISIONING ORACLE DATABASE STORAGE WITH UNISPHERE

This section covers storage provisioning for Oracle databases using Unisphere for VMAX.

Creation of a host Initiator Group (IG)

Provisioning storage requires creation of host initiator groups by specifying the host HBA WWN ports. To create a host IG, select the appropriate VMAX storage array, then select the Hosts tab and choose from the list of initiator WWNs, as shown in Figure 5.

Figure 5 Create Initiator Group

Creation of a Storage Group (SG)

A storage group defines a group of one or more host devices. Using the SG creation screen, a storage group name is specified, and new storage devices can be created and placed into the storage group together with their initial SLO. If more than one group of devices is requested, each group creates a child SG and can take its own unique SLO. The storage group creation screen is shown in Figure 6.

Figure 6 Create Storage Group

Select host(s)

In this step, the hosts to which the new storage will be provisioned are selected. This is done by selecting an IG (host HBA ports), as shown in Figure 7.

Figure 7 Create Initiator Group


More information

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies

More information

Reference Architecture. EMC Global Solutions. 42 South Street Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com

Reference Architecture. EMC Global Solutions. 42 South Street Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com EMC Backup and Recovery for SAP with IBM DB2 on IBM AIX Enabled by EMC Symmetrix DMX-4, EMC CLARiiON CX3, EMC Replication Manager, IBM Tivoli Storage Manager, and EMC NetWorker Reference Architecture EMC

More information

EMC Symmetrix VMAX Using EMC SRDF/TimeFinder and Oracle Database 10g/11g

EMC Symmetrix VMAX Using EMC SRDF/TimeFinder and Oracle Database 10g/11g EMC Symmetrix VMAX Using EMC SRDF/TimeFinder and Oracle Database 10g/11g Applied Technology Abstract This white paper introduces EMC Symmetrix VMAX software and hardware capabilities, and provides a comprehensive

More information

Microsoft SQL Server 2005 on Windows Server 2003

Microsoft SQL Server 2005 on Windows Server 2003 EMC Backup and Recovery for SAP Microsoft SQL Server 2005 on Windows Server 2003 Enabled by EMC CLARiiON CX3, EMC Disk Library, EMC Replication Manager, EMC NetWorker, and Symantec Veritas NetBackup Reference

More information

EMC VMAX3 FAMILY VMAX 100K, 200K, 400K

EMC VMAX3 FAMILY VMAX 100K, 200K, 400K EMC VMAX3 FAMILY VMAX 100K, 200K, 400K The EMC VMAX3 TM family delivers the latest in Tier-1 scale-out multi-controller architecture with unmatched consolidation and efficiency for the enterprise. With

More information

INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT

INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT UNPRECEDENTED OBSERVABILITY, COST-SAVING PERFORMANCE ACCELERATION, AND SUPERIOR DATA PROTECTION KEY FEATURES Unprecedented observability

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products MaxDeploy Ready Hyper- Converged Virtualization Solution With SanDisk Fusion iomemory products MaxDeploy Ready products are configured and tested for support with Maxta software- defined storage and with

More information

EMC AUTOMATED PERFORMANCE OPTIMIZATION for MICROSOFT APPLICATIONS

EMC AUTOMATED PERFORMANCE OPTIMIZATION for MICROSOFT APPLICATIONS White Paper EMC AUTOMATED PERFORMANCE OPTIMIZATION for MICROSOFT APPLICATIONS Automated performance optimization Cloud-ready infrastructure Simplified, automated management EMC Solutions Group Abstract

More information

The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000

The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000 The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000 Summary: This document describes how to analyze performance on an IBM Storwize V7000. IntelliMagic 2012 Page 1 This

More information

Business white paper Invest in the right flash storage solution

Business white paper Invest in the right flash storage solution Business white paper Invest in the right flash storage solution A guide for the savvy tech buyer Business white paper Page 2 Introduction You re looking at flash storage because you see it s taking the

More information

Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V

Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Implementation Guide By Eduardo Freitas and Ryan Sokolowski February 2010 Summary Deploying

More information

HGST Virident Solutions 2.0

HGST Virident Solutions 2.0 Brochure HGST Virident Solutions 2.0 Software Modules HGST Virident Share: Shared access from multiple servers HGST Virident HA: Synchronous replication between servers HGST Virident ClusterCache: Clustered

More information

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores

More information

VMAX 3 AND ORACLE. Yaron Dar & Udgith Mankad VMAX Partner Engineering ORACLE BEST PRACTICES FOR REPLICATIONS, BACKUP/RECOVERY, AND PROTECTPOINT

VMAX 3 AND ORACLE. Yaron Dar & Udgith Mankad VMAX Partner Engineering ORACLE BEST PRACTICES FOR REPLICATIONS, BACKUP/RECOVERY, AND PROTECTPOINT 1 Yaron Dar & Udgith Mankad VMAX Partner Engineering VMAX 3 AND ORACLE ORACLE BEST PRACTICES FOR REPLICATIONS, BACKUP/RECOVERY, AND PROTECTPOINT 2 ROADMAP INFORMATION DISCLAIMER EMC makes no representation

More information

FLASH STORAGE SOLUTION

FLASH STORAGE SOLUTION Invest in the right FLASH STORAGE SOLUTION A guide for the savvy tech buyer Introduction You re looking at flash storage because you see it s taking the storage world by storm. You re interested in accelerating

More information

BUSINESS CONTINUITY AND DISASTER RECOVERY FOR ORACLE 11g

BUSINESS CONTINUITY AND DISASTER RECOVERY FOR ORACLE 11g BUSINESS CONTINUITY AND DISASTER RECOVERY FOR ORACLE 11g ENABLED BY EMC VMAX 10K AND EMC RECOVERPOINT Technical Presentation EMC Solutions Group 1 Agenda Business case Symmetrix VMAX 10K overview RecoverPoint

More information

Nimble Storage + OpenStack 打 造 最 佳 企 業 專 屬 雲 端 平 台. Nimble Storage Brian Chen, Solution Architect Jay Wang, Principal Software Engineer

Nimble Storage + OpenStack 打 造 最 佳 企 業 專 屬 雲 端 平 台. Nimble Storage Brian Chen, Solution Architect Jay Wang, Principal Software Engineer Nimble Storage + OpenStack 打 造 最 佳 企 業 專 屬 雲 端 平 台 Nimble Storage Brian Chen, Solution Architect Jay Wang, Principal Software Engineer Redefining the Storage Market with Adaptive Flash Headquartered in

More information

Navisphere Quality of Service Manager (NQM) Applied Technology

Navisphere Quality of Service Manager (NQM) Applied Technology Applied Technology Abstract Navisphere Quality of Service Manager provides quality-of-service capabilities for CLARiiON storage systems. This white paper discusses the architecture of NQM and methods for

More information

EMC BACKUP-AS-A-SERVICE

EMC BACKUP-AS-A-SERVICE Reference Architecture EMC BACKUP-AS-A-SERVICE EMC AVAMAR, EMC DATA PROTECTION ADVISOR, AND EMC HOMEBASE Deliver backup services for cloud and traditional hosted environments Reduce storage space and increase

More information

THE SUMMARY. ARKSERIES - pg. 3. ULTRASERIES - pg. 5. EXTREMESERIES - pg. 9

THE SUMMARY. ARKSERIES - pg. 3. ULTRASERIES - pg. 5. EXTREMESERIES - pg. 9 PRODUCT CATALOG THE SUMMARY ARKSERIES - pg. 3 ULTRASERIES - pg. 5 EXTREMESERIES - pg. 9 ARK SERIES THE HIGH DENSITY STORAGE FOR ARCHIVE AND BACKUP Unlimited scalability Painless Disaster Recovery The ARK

More information

EMC Symmetrix V-Max with Veritas Storage Foundation

EMC Symmetrix V-Max with Veritas Storage Foundation EMC Symmetrix V-Max with Veritas Storage Foundation Applied Technology Abstract This white paper details the benefits of deploying EMC Symmetrix V-Max Virtual Provisioning and Veritas Storage Foundation

More information

EMC INFRASTRUCTURE FOR VMWARE CLOUD ENVIRONMENTS

EMC INFRASTRUCTURE FOR VMWARE CLOUD ENVIRONMENTS White Paper EMC INFRASTRUCTURE FOR VMWARE CLOUD ENVIRONMENTS Simplified storage management with FAST VP Remote replication with assured performance Simplified storage provisioning with EMC Unisphere for

More information

Application Workload Control Using Host I/O Limits for SQL Server on EMC Symmetrix VMAX

Application Workload Control Using Host I/O Limits for SQL Server on EMC Symmetrix VMAX WHITE PAPER Application Workload Control Using Host I/O Limits for Server on EMC Symmetrix VMAX Abstract The Symmetrix VMAX is an ideal consolidation platform designed to be simple, cost effective and

More information

High Performance Oracle RAC Clusters A study of SSD SAN storage A Datapipe White Paper

High Performance Oracle RAC Clusters A study of SSD SAN storage A Datapipe White Paper High Performance Oracle RAC Clusters A study of SSD SAN storage A Datapipe White Paper Contents Introduction... 3 Disclaimer... 3 Problem Statement... 3 Storage Definitions... 3 Testing Method... 3 Test

More information

EMC SYMMETRIX VMAX 10K

EMC SYMMETRIX VMAX 10K EMC SYMMETRIX VMAX 10K EMC Symmetrix VMAX 10K with the Enginuity operating environment delivers a true Tier-1 multi-controller, scale-out architecture with consolidation and efficiency for the enterprise.

More information

Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1

Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1 Performance Study Performance Characteristics of and RDM VMware ESX Server 3.0.1 VMware ESX Server offers three choices for managing disk access in a virtual machine VMware Virtual Machine File System

More information

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can

More information

CERNER EMR: OPTIMIZING IT INFRASTRUCTURES

CERNER EMR: OPTIMIZING IT INFRASTRUCTURES CERNER EMR: OPTIMIZING IT INFRASTRUCTURES Guidance for maximizing performance, availability and mobility of Cerner Millennium environments ABSTRACT Cerner Millennium EMR plays a critical role in delivering

More information

Overview: X5 Generation Database Machines

Overview: X5 Generation Database Machines Overview: X5 Generation Database Machines Spend Less by Doing More Spend Less by Paying Less Rob Kolb Exadata X5-2 Exadata X4-8 SuperCluster T5-8 SuperCluster M6-32 Big Memory Machine Oracle Exadata Database

More information

June 2009. Blade.org 2009 ALL RIGHTS RESERVED

June 2009. Blade.org 2009 ALL RIGHTS RESERVED Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL RIGHTS

More information

Storage Tiering for Microsoft SQL Server and EMC Symmetrix VMAX with Enginuity 5874

Storage Tiering for Microsoft SQL Server and EMC Symmetrix VMAX with Enginuity 5874 Storage Tiering for Microsoft SQL Server and EMC Symmetrix VMAX with Enginuity 5874 Applied Technology Abstract This white paper examines the application of managing Microsoft SQL Server in a storage environment

More information

DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2

DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2 DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2 EMC Solutions Abstract This document describes the reference architecture of a virtualized Microsoft Dynamics AX 2012 R2 implementation that is enabled

More information

ORACLE 11g AND 12c DATABASE CONSOLIDATION AND WORKLOAD SCALABILITY WITH EMC XTREMIO 4.0

ORACLE 11g AND 12c DATABASE CONSOLIDATION AND WORKLOAD SCALABILITY WITH EMC XTREMIO 4.0 ORACLE 11g AND 12c DATABASE CONSOLIDATION AND WORKLOAD SCALABILITY WITH EMC XTREMIO 4.0 Consolidation Oracle Production/Test/Dev/Reporting workloads in physical and virtual environments Simplicity Easy

More information

EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION

EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION Automated file synchronization Flexible, cloud-based administration Secure, on-premises storage EMC Solutions January 2015 Copyright 2014 EMC Corporation. All

More information

Realizing the True Potential of Software-Defined Storage

Realizing the True Potential of Software-Defined Storage Realizing the True Potential of Software-Defined Storage Who should read this paper Technology leaders, architects, and application owners who are looking at transforming their organization s storage infrastructure

More information

REMOTE SITE RECOVERY OF ORACLE ENTERPRISE DATA WAREHOUSE USING EMC DATA DOMAIN

REMOTE SITE RECOVERY OF ORACLE ENTERPRISE DATA WAREHOUSE USING EMC DATA DOMAIN White Paper REMOTE SITE RECOVERY OF ORACLE ENTERPRISE DATA WAREHOUSE USING EMC DATA DOMAIN EMC SOLUTIONS GROUP Abstract This white paper describes how a 12 TB Oracle data warehouse was transported from

More information

SUN ORACLE EXADATA STORAGE SERVER

SUN ORACLE EXADATA STORAGE SERVER SUN ORACLE EXADATA STORAGE SERVER KEY FEATURES AND BENEFITS FEATURES 12 x 3.5 inch SAS or SATA disks 384 GB of Exadata Smart Flash Cache 2 Intel 2.53 Ghz quad-core processors 24 GB memory Dual InfiniBand

More information

Optimizing Storage for Better TCO in Oracle Environments. Part 1: Management INFOSTOR. Executive Brief

Optimizing Storage for Better TCO in Oracle Environments. Part 1: Management INFOSTOR. Executive Brief Optimizing Storage for Better TCO in Oracle Environments INFOSTOR Executive Brief a QuinStreet Excutive Brief. 2012 To the casual observer, and even to business decision makers who don t work in information

More information

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com Parallels Cloud Storage White Paper Performance Benchmark Results www.parallels.com Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...

More information

EMC VPLEX FAMILY. Continuous Availability and Data Mobility Within and Across Data Centers

EMC VPLEX FAMILY. Continuous Availability and Data Mobility Within and Across Data Centers EMC VPLEX FAMILY Continuous Availability and Data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is

More information

OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006

OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006 OPTIMIZING EXCHANGE SERVER IN A TIERED STORAGE ENVIRONMENT WHITE PAPER NOVEMBER 2006 EXECUTIVE SUMMARY Microsoft Exchange Server is a disk-intensive application that requires high speed storage to deliver

More information

Performance Validation and Test Results for Microsoft Exchange Server 2010 Enabled by EMC CLARiiON CX4-960

Performance Validation and Test Results for Microsoft Exchange Server 2010 Enabled by EMC CLARiiON CX4-960 Performance Validation and Test Results for Microsoft Exchange Server 2010 Abstract The purpose of this white paper is to profile the performance of the EMC CLARiiON CX4-960 with Microsoft Exchange Server

More information

Microsoft Windows Server in a Flash

Microsoft Windows Server in a Flash Microsoft Windows Server in a Flash Combine Violin s enterprise-class storage with the ease and flexibility of Windows Storage Server in an integrated solution so you can achieve higher performance and

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

EMC ISILON OneFS OPERATING SYSTEM Powering scale-out storage for the new world of Big Data in the enterprise

EMC ISILON OneFS OPERATING SYSTEM Powering scale-out storage for the new world of Big Data in the enterprise EMC ISILON OneFS OPERATING SYSTEM Powering scale-out storage for the new world of Big Data in the enterprise ESSENTIALS Easy-to-use, single volume, single file system architecture Highly scalable with

More information

WHITE PAPER 1 WWW.FUSIONIO.COM

WHITE PAPER 1 WWW.FUSIONIO.COM 1 WWW.FUSIONIO.COM WHITE PAPER WHITE PAPER Executive Summary Fusion iovdi is the first desktop- aware solution to virtual desktop infrastructure. Its software- defined approach uniquely combines the economics

More information

EMC VMAX3 FAMILY. Enterprise Data Services Platform For Mission Critical Hybrid Cloud And Hyper-Consolidation ESSENTIALS POWERFUL

EMC VMAX3 FAMILY. Enterprise Data Services Platform For Mission Critical Hybrid Cloud And Hyper-Consolidation ESSENTIALS POWERFUL EMC VMAX3 FAMILY Enterprise Data Services Platform For Mission Critical Hybrid Cloud And Hyper-Consolidation ESSENTIALS Achieve predictable performance at massive scale for extreme-growth hybrid cloud

More information

Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture. Dell Compellent Product Specialist Team

Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture. Dell Compellent Product Specialist Team Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture Dell Compellent Product Specialist Team THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL

More information

EMC VPLEX FAMILY. Continuous Availability and data Mobility Within and Across Data Centers

EMC VPLEX FAMILY. Continuous Availability and data Mobility Within and Across Data Centers EMC VPLEX FAMILY Continuous Availability and data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is

More information

Oracle Exadata Database Machine for SAP Systems - Innovation Provided by SAP and Oracle for Joint Customers

Oracle Exadata Database Machine for SAP Systems - Innovation Provided by SAP and Oracle for Joint Customers Oracle Exadata Database Machine for SAP Systems - Innovation Provided by SAP and Oracle for Joint Customers Masood Ahmed EMEA Infrastructure Solutions Oracle/SAP Relationship Overview First SAP R/3 release

More information

Technical Paper. Best Practices for SAS on EMC SYMMETRIX VMAX TM Storage

Technical Paper. Best Practices for SAS on EMC SYMMETRIX VMAX TM Storage Technical Paper Best Practices for SAS on EMC SYMMETRIX VMAX TM Storage Paper Title Table of Contents Introduction... 1 BRIEF OVERVIEW OF VMAX ARCHITECTURE... 1 PHYSICAL STORAGE DISK TYPES, FA PORTS,

More information

EMC DATA DOMAIN OPERATING SYSTEM

EMC DATA DOMAIN OPERATING SYSTEM ESSENTIALS HIGH-SPEED, SCALABLE DEDUPLICATION Up to 58.7 TB/hr performance Reduces protection storage requirements by 10 to 30x CPU-centric scalability DATA INVULNERABILITY ARCHITECTURE Inline write/read

More information

EMC Business Continuity for Microsoft SQL Server 2008

EMC Business Continuity for Microsoft SQL Server 2008 EMC Business Continuity for Microsoft SQL Server 2008 Enabled by EMC Celerra Fibre Channel, EMC MirrorView, VMware Site Recovery Manager, and VMware vsphere 4 Reference Architecture Copyright 2009, 2010

More information

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER STORAGE CENTER DATASHEET STORAGE CENTER Go Beyond the Boundaries of Traditional Storage Systems Today s storage vendors promise to reduce the amount of time and money companies spend on storage but instead

More information

EMC Data Domain Boost for Oracle Recovery Manager (RMAN)

EMC Data Domain Boost for Oracle Recovery Manager (RMAN) White Paper EMC Data Domain Boost for Oracle Recovery Manager (RMAN) Abstract EMC delivers Database Administrators (DBAs) complete control of Oracle backup, recovery, and offsite disaster recovery with

More information

The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000)

The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) IntelliMagic, Inc. 558 Silicon Drive Ste 101 Southlake, Texas 76092 USA Tel: 214-432-7920

More information

Using VMWare VAAI for storage integration with Infortrend EonStor DS G7i

Using VMWare VAAI for storage integration with Infortrend EonStor DS G7i Using VMWare VAAI for storage integration with Infortrend EonStor DS G7i Application Note Abstract: This document describes how VMware s vsphere Storage APIs (VAAI) can be integrated and used for accelerating

More information

EMC NETWORKER SNAPSHOT MANAGEMENT

EMC NETWORKER SNAPSHOT MANAGEMENT White Paper Abstract This white paper describes the benefits of NetWorker Snapshot Management for EMC Arrays. It also explains the value of using EMC NetWorker for snapshots and backup. June 2013 Copyright

More information

Simplified Management With Hitachi Command Suite. By Hitachi Data Systems

Simplified Management With Hitachi Command Suite. By Hitachi Data Systems Simplified Management With Hitachi Command Suite By Hitachi Data Systems April 2015 Contents Executive Summary... 2 Introduction... 3 Hitachi Command Suite v8: Key Highlights... 4 Global Storage Virtualization

More information

EqualLogic PS Series Load Balancers and Tiering, a Look Under the Covers. Keith Swindell Dell Storage Product Planning Manager

EqualLogic PS Series Load Balancers and Tiering, a Look Under the Covers. Keith Swindell Dell Storage Product Planning Manager EqualLogic PS Series Load Balancers and Tiering, a Look Under the Covers Keith Swindell Dell Storage Product Planning Manager Topics Guiding principles Network load balancing MPIO Capacity load balancing

More information

SMB Direct for SQL Server and Private Cloud

SMB Direct for SQL Server and Private Cloud SMB Direct for SQL Server and Private Cloud Increased Performance, Higher Scalability and Extreme Resiliency June, 2014 Mellanox Overview Ticker: MLNX Leading provider of high-throughput, low-latency server

More information

EMC DATA DOMAIN OPERATING SYSTEM

EMC DATA DOMAIN OPERATING SYSTEM EMC DATA DOMAIN OPERATING SYSTEM Powering EMC Protection Storage ESSENTIALS High-Speed, Scalable Deduplication Up to 58.7 TB/hr performance Reduces requirements for backup storage by 10 to 30x and archive

More information

REFERENCE ARCHITECTURE. PernixData FVP Software and Splunk Enterprise

REFERENCE ARCHITECTURE. PernixData FVP Software and Splunk Enterprise REFERENCE ARCHITECTURE PernixData FVP Software and Splunk Enterprise 1 Table of Contents Executive Summary.... 3 Solution Overview.... 4 Hardware Components.... 5 Server and Network... 5 Storage.... 5

More information

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN ENTERPRISE STORAGE WITH THE FUTURE BUILT IN Breakthrough Efficiency Intelligent Storage Automation Single Platform Scalability Real-time Responsiveness Continuous Protection Storage Controllers Storage

More information

Building the Virtual Information Infrastructure

Building the Virtual Information Infrastructure Technology Concepts and Business Considerations Abstract A virtual information infrastructure allows organizations to make the most of their data center environment by sharing computing, network, and storage

More information

Oracle FS1-2 Flash Storage System Software Features

Oracle FS1-2 Flash Storage System Software Features Oracle FS1-2 Flash Storage System Software Features Oracle FS1-2 flash storage system, Oracle s premier preferred SAN storage solution, delivers enterprise-grade storage capabilities that are optimized

More information

IOmark-VM. DotHill AssuredSAN Pro 5000. Test Report: VM- 130816-a Test Report Date: 16, August 2013. www.iomark.org

IOmark-VM. DotHill AssuredSAN Pro 5000. Test Report: VM- 130816-a Test Report Date: 16, August 2013. www.iomark.org IOmark-VM DotHill AssuredSAN Pro 5000 Test Report: VM- 130816-a Test Report Date: 16, August 2013 Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark-VM, IOmark-VDI, VDI-IOmark, and IOmark

More information

Unleash the Performance of vsphere 5.1 with 16Gb Fibre Channel

Unleash the Performance of vsphere 5.1 with 16Gb Fibre Channel W h i t e p a p e r Unleash the Performance of vsphere 5.1 with 16Gb Fibre Channel Introduction The July 2011 launch of the VMware vsphere 5.0 which included the ESXi 5.0 hypervisor along with vcloud Director

More information

REDUCING DATABASE TOTAL COST OF OWNERSHIP WITH FLASH

REDUCING DATABASE TOTAL COST OF OWNERSHIP WITH FLASH REDUCING DATABASE TOTAL COST OF OWNERSHIP WITH FLASH MICHAEL GUTHRIE SYSTEMS ENGINEER EMC CORPORATION 1 SO YOU VE HEARD ABOUT XTREMIO INCREDIBLE PERFORMANCE WITH FLASH OPTIMIZED DATA SERVICES INLINE DATA

More information

EMC Disk Library with EMC Data Domain Deployment Scenario

EMC Disk Library with EMC Data Domain Deployment Scenario EMC Disk Library with EMC Data Domain Deployment Scenario Best Practices Planning Abstract This white paper is an overview of the EMC Disk Library with EMC Data Domain deduplication storage system deployment

More information

Microsoft SQL Server 2014 Fast Track

Microsoft SQL Server 2014 Fast Track Microsoft SQL Server 2014 Fast Track 34-TB Certified Data Warehouse 103-TB Maximum User Data Tegile Systems Solution Review 2U Design: Featuring Tegile T3800 All-Flash Storage Array http:// www.tegile.com/solutiuons/sql

More information

EMC Business Continuity for Microsoft SQL Server 2008

EMC Business Continuity for Microsoft SQL Server 2008 EMC Business Continuity for Microsoft SQL Server 2008 Enabled by EMC Symmetrix V-Max with SRDF/CE, EMC Replication Manager, and Enterprise Flash Drives Reference Architecture Copyright 2009 EMC Corporation.

More information

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest

More information