Whitepaper, September 2014
Flash Performance for Oracle RAC with PCIe Shared Storage
A Revolutionary Oracle RAC Architecture
by Estuate and HGST
Table of Contents

- Introduction
- RAC Share Everything Architecture
- RAC Cluster with HGST FlashMAX
- ASM and Oracle Files
- The Write Advantage
- The Read Advantage
- High Availability
- Performance
- Cost Savings
- Conclusion
Introduction

Oracle Real Application Clusters (RAC) allows Oracle Database to run any packaged or custom application unchanged across a server pool, providing the highest levels of availability and the most flexible scalability. To keep costs low, even the highest-end systems can be built out of standardized, commodity parts. Oracle Real Application Clusters provides a foundation for Oracle's Private Cloud Architecture. Oracle RAC technology enables a low-cost hardware platform to deliver a quality of service that rivals and exceeds the levels of availability and scalability achieved by more expensive mainframe SMP computers. By dramatically reducing administration costs and providing new levels of administration flexibility, Oracle RAC has become the de facto architecture for enterprise database applications.

Until now, "PCIe shared storage" has been an oxymoron. But with HGST FlashMAX PCIe Solid State Disks (SSDs) coupled with HGST Virident Software, Oracle RAC systems can gain the performance benefits of PCIe while enjoying the reliability and scalability of a SAN. This architecture uses microsecond-latency Infiniband networks between nodes to deliver low latency and high IOPS while preserving the integrity of the Oracle RAC platform. HGST Virident Software comprises Virident HA for high-availability replication between PCIe SSDs, Virident Share for storage pooling, and Virident ClusterCache for server-side caching in front of existing SAN or DAS installations.

This paper describes this unique offering, which makes it possible for an Oracle RAC database to run entirely on PCIe SSDs. The RAC cluster needs no other shared storage, yet it still achieves excellent performance, availability, scalability, and cost savings for building out blocks of databases for the public or private cloud.
The Oracle RAC database uses the extended cluster configuration and ASM preferred reads to achieve High Availability (HA) and get the best performance out of the suggested configuration.

RAC Share Everything Architecture

The Oracle RAC database has a Share Everything architecture. All data files, control files, SPFILEs, and redo log files in Oracle RAC environments must reside on cluster-aware shared disks so that all of the cluster database instances can access the shared storage. All database instances must see the same view of the cluster storage and the data files. The redo log files on the shared storage are used for instance recovery.

PCI Express (PCIe or PCI-E) is a high-speed expansion card format that connects a computer with its attached peripherals. PCIe SSDs are the latest technology adopted by data centers to store and transfer data very efficiently. Since these SSDs are attached to a server directly, they're known as server-attached PCIe SSDs. They have some very desirable features for databases that require high-performance, cost-effective storage. Until now, however, it was not possible to use server-attached PCIe SSDs as RAC storage. In fact, they were not common even for single-server databases, due to some of the limitations of existing PCIe SSDs. Before we discuss the HGST solution, we will quickly list the advantages and disadvantages of server-attached PCIe SSDs to better understand why the proposed solution is revolutionary.
The advantages of PCIe SSDs over array storage are:

- Performance: The biggest benefit of PCIe SSDs is increased performance. Not only does the PCIe interface have low latency for data transfer, it also bypasses any storage area networking to store or retrieve data. It is therefore the fastest way of accessing data, delivering microsecond latencies versus millisecond latencies for traditional SAN-based storage.
- Energy Savings: Server-attached PCIe SSDs eliminate the need for additional storage servers, saving on power and cooling. Traditional storage solutions for high throughput, low latency, and high IOPS need hundreds of hard disk drives (HDDs), Fibre Channel controllers, and significant amounts of power and cooling.
- Space Savings: PCIe SSDs are compact and fit into the PCIe slot of a server. They eliminate the need for rack space, cooling, and power for storage servers.

The disadvantages of traditional PCIe SSDs are:

- Capacity: Traditional enterprise-class PCIe SSDs are often on the order of 100 GB rather than terabytes in size. System architects can quickly run out of available PCIe slots in a server when sizing for capacity alone.
- High Availability: If a server goes down, its storage is down as well. Hence there is an absolute need for mirroring and other high-availability software to ensure service continuity.
- No Sharing: Traditional PCIe SSDs do not have a Share Everything capability, so they simply cannot be used as storage for Oracle RAC databases.

The unique combination of HGST FlashMAX hardware and HGST Virident Software eliminates the disadvantages of PCIe cards and provides a powerful solution for database architects. Two features make HGST PCIe SSDs very compelling for use as storage for Oracle databases:

Capacity: Currently HGST PCIe SSDs range from 550 GB up to 4.8 TB, and can be used as building blocks to reach any required capacity.
Additional SSDs can be added to a single server, or across servers, to increase both storage capacity and available CPU. Being compact with high capacity and high performance, they offer significant cost savings if you can eliminate expensive storage servers altogether for your production systems.

High Availability: HGST Virident Share provides a PCIe SSD sharing capability over an Infiniband (IB) network. Remote servers can access every server's FlashMAX PCIe SSDs as shared storage using HGST Virident Share. This meets the exact requirements of the Share Everything architecture for an Oracle RAC database.

HGST Virident Share makes it possible to use all the advantages of PCIe SSDs with none of the disadvantages. The combination of Virident Share and Oracle RAC takes the best of both worlds. This unique RAC architecture has several advantages that make it a compelling platform for building out a cloud on which to deploy Oracle RAC.
Oracle RAC on HGST FlashMAX PCIe SSDs

HGST Virident software redefines the enterprise storage architecture. It brings server-side flash into enterprise applications that previously relied on high-cost proprietary SAN (Storage Area Networking) hardware. HGST Virident software builds on HGST Flash hardware and adds flash-aware, network-level storage capabilities. You can now use HGST FlashMAX PCIe SSDs to create Virtual Flash Storage Networks on the server side, deploying applications on flash storage with shared block-level access and high availability across servers, while keeping the low-latency advantages of PCIe-attached flash.

The PCIe cards are directly attached to the database server, so no network access is needed to move data from the SSD to the server's memory and CPU. HGST's cards provide over a million IOPS with a disk-like capacity of up to 4.8 TB per SSD. In effect, you have servers with practically limitless I/O throughput. Applications that use HGST PCIe SSDs will often see their bottleneck move from storage to CPU and memory. Note that historically most performance tuning involved reducing I/O contention, because the bottleneck was mostly I/O.

[Figure: RAC Cluster with HGST FlashMAX — a two-node RAC cluster using HGST FlashMAX storage only; each server's FlashMAX PCIe card forms a failure group (FG1, FG2) of a Virident Share disk group.]

ASM built-in mirroring is used to efficiently mirror database files across servers (failure groups). The HGST FlashMAX II cards on each server are used as separate failure groups. Servers 1 and 2 have FlashMAX PCIe cards configured with ASM normal redundancy, and ASM maintains the consistency of data across the two PCIe SSDs by means of ASM mirroring. A diskgroup consists of logical shareable disks across the servers; in our case, each FlashMAX SSD is exposed as a disk. Each diskgroup has two failure groups, FG1 and FG2, with one on each node.
So all data in server1:fg1 is also available in server2:fg2. Oracle ASM ensures that at least two copies of data are present, one on each failure group. This way, should a server (i.e., a failure group) go down, there is no impact on data availability. Each failure group can contain one or more FlashMAX PCIe SSDs, and data is evenly distributed across all the SSDs within a single failure group.
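The placement rule above can be captured in a small Python sketch. This is a toy model of ASM normal-redundancy mirroring across two failure groups, not Oracle's actual extent allocator; all names here are illustrative.

```python
# Toy model of ASM normal-redundancy placement across two failure
# groups (one per server). Real ASM extent allocation is internal
# to Oracle; this only illustrates the invariant described above.

def place_extents(num_extents, failure_groups=("FG1", "FG2")):
    """Return a mapping extent -> set of failure groups holding a copy.

    With normal redundancy every extent has two copies, one in each
    failure group, so losing a whole server (= one failure group)
    never loses data.
    """
    placement = {}
    for extent in range(num_extents):
        placement[extent] = set(failure_groups)  # primary + mirror
    return placement

def survives_server_loss(placement, lost_fg):
    """Data survives if every extent has a copy outside lost_fg."""
    return all(copies - {lost_fg} for copies in placement.values())

placement = place_extents(1000)
assert survives_server_loss(placement, "FG1")
assert survives_server_loss(placement, "FG2")
```

The invariant checked by `survives_server_loss` is exactly why one failure group per server is the right granularity: a server failure takes out one full failure group, and the mirror in the other group keeps the data available.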
Both servers can access the failure groups for reads and writes. ASM Preferred Read is set up for node affinity, so all reads from server1 access data only from FG1, and reads from server2 access data from FG2, through the direct high-speed PCIe bus attached to the server. Each failure group has a mirror copy of the data. Writes on server1 are written directly to FG1 through the PCIe bus and to FG2 through a high-speed Infiniband interconnect.

Network

There are two networks between the servers. They are connected through Infiniband and run standard TCP over IB as well as IB RDMA. The Infiniband network provides microsecond latencies and gigabytes of bandwidth.

RAC Interconnect: Oracle Clusterware requires that you connect the nodes in the cluster to a private network by way of a private interconnect. The private interconnect is a separate network that you configure between cluster nodes; it should not be accessible to nodes that are not members of the cluster. It is used within the RAC cluster to share and transfer data from memory to memory between the servers. In this architecture, for the highest performance, this interconnect is Infiniband.

Virident Share: HGST Virident Share works similarly to the RAC interconnect; it too runs on a separate private network. In traditional cluster storage, all the servers are connected to a storage network with one big storage rack connected to the hub. Virident Share, on the other hand, is software that allows the PCIe SSDs of all servers to be viewed as shared disks on every server. All servers are connected together through high-speed Ethernet or Infiniband networks. Since PCIe SSD storage is attached to the server, there is no requirement to connect to any other storage.
In this architecture, the shared Infiniband fabric utilized by Virident Share has much lower latency and higher bandwidth than a traditional storage network, which usually runs on Fibre Channel.

[Figure: Read paths on traditional SAN storage versus HGST FlashMAX — with a SAN, a read of FG1 or FG2 traverses the storage network to the disk group; with FlashMAX, each server reads its local failure group directly, with Virident Share over the Infiniband network linking FG1 and FG2.]
ASM and Oracle Files

Since Oracle release 11gR2, all Oracle-related files can reside in ASM. Prior to 11gR2, the Oracle voting disks and OCR files that are part of the cluster had to be kept outside ASM to help start and stop the RAC cluster. Oracle then introduced a new process that makes it possible to have all files within ASM. You can still configure the database with voting disks and OCR files outside ASM, but here we discuss keeping all Oracle-managed files within ASM. Note that a two-node RAC cluster will need a quorum voting disk; see the Appendix for more information about this ASM functionality.

The Architecture Advantage

Oracle RAC on Virident Share differs fundamentally in how data is stored and retrieved, and this design has advantages over traditional RAC storage:

- Unlimited IOPS with linear scalability
- Less I/O traffic through the network

Unlimited IOPS with Linear Scalability

With this architecture, it is possible to have unlimited IOPS, scaling linearly, due to the low-latency I/O of the FlashMAX SSDs and the Virident Share network. In a simulated test, a single server was able to achieve 80,000 8K IOPS before it ran out of CPU resources. Adding another node to the test doubled this to 160,000, with both nodes pegged on CPU. If additional RAC nodes were added to this setup, you would be adding more CPU resources in addition to storage. These CPU resources could be used to access SSDs through the Virident Share network or through the PCIe bus, depending on where the data resides. The ability to read and write more blocks through additional nodes means more IOPS. It is possible to construct a scenario in which the I/O from all the nodes targets a single FlashMAX SSD on one server and maxes out that SSD's IOPS capacity; in practice this is unlikely, as data is distributed across all the RAC nodes and further distributed across diskgroups and SSDs within a single failure group.
Note that when you double the FlashMAX SSDs within a single failure group, the maximum IOPS limit also doubles. The point is that with the Virident Share network, the total IOPS capacity of the RAC cluster is not limited by the IOPS limit of a single SSD. This is very different from closed SAN storage architectures, where the total IOPS of a storage system is fixed in the hardware specification and systems with higher IOPS are more expensive. With the HGST Virident Share network, you get low-latency I/O with IOPS increasing as more servers are added. This is a revolutionary change that liberates DBAs and architects from working under the confines of storage performance and capacity limitations.
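The scaling behavior reported above can be summarized in a few lines of Python. The per-node figure comes from the whitepaper's simulated test (CPU-bound at roughly 80,000 8K reads/sec per node); this sketch only restates that observation as arithmetic, it is not a reproducible benchmark.

```python
# Illustrative model of the linear IOPS scaling described above.
# PER_NODE_IOPS is the CPU-bound ceiling observed per node in the
# whitepaper's simulated test; the aggregate grows with node count
# because storage is not the limiting resource.

PER_NODE_IOPS = 80_000  # 8K reads/sec before a node becomes CPU-bound

def cluster_iops(nodes):
    # Aggregate IOPS scales with node count until some per-SSD or
    # network limit is eventually reached.
    return nodes * PER_NODE_IOPS

assert cluster_iops(1) == 80_000
assert cluster_iops(2) == 160_000   # matches the two-node test result
```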
The Write Advantage (50% less I/O traffic through the network)

In the above configuration, all the database files are stored in an ASM diskgroup. The files are striped across the disks in each failure group and mirrored across failure groups, so every file on server 1 is present on server 2. When ASM writes to server1:fg1, it also writes to server2:fg2; this mirror data is sent to server2:fg2 through the Virident Share private network. The interconnect in our case runs over an Infiniband network, which has very low latency and high bandwidth.

In a typical RAC configuration with shared storage, all servers are connected to the shared storage (SAN) through network switches, which means all writes must pass through the network to be written to disk. In the configuration above, one of the writes goes directly to local PCIe storage and only the other write passes over the network. The advantage is a 50% reduction in write I/O traffic from server to diskgroup compared to typical SAN storage.

The writes are synchronous, and an acknowledgment is returned to the database instance only after all I/Os are complete, so the latency of a write equals that of the slowest disk. In our case, you get I/O performance similar to any other shared storage connected through a high-speed network; and due to the low write latencies of FlashMAX and the low network latency of Infiniband compared to traditional SAN storage, the performance is likely to be much better. In addition, since the amount of data passed through the network is half that of typical shared storage, you can push many more writes through the network before hitting a network bottleneck. Infiniband networking has become an industry standard and is also used as a key component in high-performance Oracle architectures to reduce latency and increase bandwidth for the RAC interconnect. By utilizing it for Virident Share as well, even higher performance is possible.
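The 50% figure follows directly from counting which mirror copies cross the network. A minimal sketch, assuming 8K database blocks and two-way (normal-redundancy) mirroring:

```python
# Back-of-the-envelope comparison of write traffic on the storage
# network. With SAN-backed mirroring, both copies travel over the
# network; here, one copy is a local PCIe write and only the mirror
# crosses the Infiniband link. Block size is illustrative.

BLOCK = 8 * 1024  # 8K database block

def san_network_bytes(writes):
    return writes * BLOCK * 2      # both mirror copies cross the network

def flashmax_network_bytes(writes):
    return writes * BLOCK * 1      # local copy is free; only the mirror travels

w = 10_000
assert flashmax_network_bytes(w) == san_network_bytes(w) // 2  # 50% reduction
```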
[Figure: Path of a single write I/O from server 1 — with traditional RAC storage, both mirror writes (w_fg1, w_fg2) traverse the storage network to the disk group; with HGST FlashMAX, w_fg1 goes directly to the local PCIe card and only w_fg2 crosses the Virident Share Infiniband network.]
The Read Advantage (100% reads from local PCIe, zero traffic through the Virident Share network)

When you configure ASM failure groups, it is more efficient for server1 to read from FG1, which is directly attached to the server, than to read data from the remote server through the network. Using the preferred read failure groups configuration of ASM, all physical reads from the database are served from the local PCIe cards. This means a 100% reduction in network traffic for data reads: there is zero network latency to read data, and the access time for PCIe cards is in microseconds, not the milliseconds typical of SAN storage.

[Figure: A single-block read from server1 — with traditional storage the read of FG1 or FG2 traverses the storage network; with HGST FlashMAX it is served from the local failure group FG1 over the PCIe bus.]

The advantages of this solution architecture are:

- High Availability
- Performance
- Scalability
- Cost Savings
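The preferred-read routing rule is simple enough to state as code. This is a sketch of the policy only, with hypothetical helper names; it is not an Oracle API.

```python
# Sketch of ASM preferred-read routing in this architecture: each
# node satisfies physical reads from its locally attached failure
# group over the PCIe bus, so read traffic on the Virident Share
# network is zero. Names here are illustrative.

LOCAL_FG = {"server1": "FG1", "server2": "FG2"}

def read_path(server):
    # Preferred read: never cross the interconnect for a physical
    # read; always use the local failure group.
    return ("local-pcie", LOCAL_FG[server])

assert read_path("server1") == ("local-pcie", "FG1")
assert read_path("server2") == ("local-pcie", "FG2")
```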
High Availability

One of the main reasons traditional PCIe cards cannot be used in an Oracle RAC configuration is that they do not support Share Everything architectures. With HGST Virident Share, this biggest technical hurdle is eliminated. All data on server 1 is available on server 2; if one of the servers goes down, the application can still access the database and cluster through the surviving nodes. In our configuration, failure of a server also means failure of the attached storage and the ASM failure group. This is why you should not have more than one failure group of a diskgroup per server.

The OCR is striped and mirrored like ASM database files, so we can leverage the mirroring capabilities of ASM to mirror cluster files. The voting disk is not striped but placed as a whole on an ASM disk; if we use normal redundancy on the diskgroup, three voting files are required, each on its own ASM disk. In addition, RAC cluster HA requires that every server have access to more than half of the voting disks in order to remain part of the cluster. If a server loses access to half or more of the voting disks, nodes get evicted from the cluster, or kick themselves out. With normal redundancy you can have only three voting disks in a single diskgroup, and voting disks cannot span multiple diskgroups, so you need at least three failure groups to store them. In a two-node RAC cluster as above, we can have only two failure groups. For such cases the quorum failure group concept was introduced. A quorum failure group is a special type of failure group that stores only the voting disk; it contains no user data and is not considered when determining redundancy requirements. A quorum failure group can be created from a standard NFS-based or iSCSI disk that the RAC cluster can access. The space you need for a voting disk is typically less than 300 MB.
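The quorum arithmetic behind these rules can be made explicit. This is a sketch of the majority rule described above, not Clusterware code:

```python
# A node stays in the cluster only while it can access a strict
# majority (more than half) of the voting files. This models the
# two-node, three-voting-file setup: one voting file per failure
# group plus one on a quorum failure group (NFS/iSCSI).

def stays_in_cluster(accessible, total):
    """True if the node can access a strict majority of voting files."""
    return accessible > total / 2

# Surviving node after a server failure: its own voting file plus
# the quorum failure group's file = 2 of 3, so it is not evicted.
assert stays_in_cluster(accessible=2, total=3)

# Without the quorum failure group, the survivor might see only
# 1 of 3 voting files and would be evicted.
assert not stays_in_cluster(accessible=1, total=3)
```

This is why the quorum failure group matters in a two-node cluster: it guarantees the survivor always reaches two of three voting files regardless of which server fails.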
The installation document shows the details of how this is configured.

Performance

There are two major types of workload a database experiences:

- On-Line Transaction Processing (OLTP): OLTP performance usually involves fewer physical reads and writes and is focused on latency and IOPS.
- Data Warehouse (DW): DW performance is measured in terms of throughput. This workload has high physical reads for queries on large amounts of data, and batch writes to update the warehouse.

OLTP Performance

Physical Reads: In an OLTP system, the number of physical reads depends on the size of the SGA. The more DRAM available, the fewer physical reads per second are likely to be needed. Even the busiest OLTP benchmarks (like TPC-C) often do not need many physical reads: the busiest OLTP systems, with thousands of transactions a second, might need fewer than 25,000 8K-block I/Os per second. The FlashMAX SSDs tested were able to deliver more than 80,000 physical reads per second in a simulated environment before CPU resources were exhausted, and this doubled to over 160,000 8K block reads when the test was run against both nodes. You can conclude that reads scale very well as you add nodes. With ASM preferred reads, data is read directly through the PCIe bus whenever it resides on the local FlashMAX SSDs of the diskgroup; in such cases no reads pass through the Virident Share network. Even if one node requires a block from the other node's SGA, the data is transferred through the RAC interconnect and is considered a logical read. If the required data is not available on the local FlashMAX SSDs, it can be read through the Virident Share network. The latency of FlashMAX and the Infiniband network is in microseconds, unlike traditional SAN storage, which has latency in milliseconds. As described earlier, when more FlashMAX SSDs or servers are added to the cluster, the low-latency IOPS capacity increases with no upper limit.

Physical Writes: Physical writes go to the local SSDs and to the remote mirror through the IB network. Writing over IB takes only a few microseconds longer than a local write, in contrast to the many milliseconds common in a storage network like Fibre Channel. SSDs are popular precisely because of their zero seek time and low latency. Chances are you will run out of CPU resources on the server before hitting any storage bottleneck. Note that keeping OLTP database files and online redo log files on the same disk will not show I/O spikes or differences in I/O latency, since the bottleneck is not on the SSDs.

Network Bottleneck: One might expect a network bottleneck on the IB links when several RAC servers are performing I/O. However, as mentioned above, since the amount of data that must pass through the network is half for writes and zero for reads, you can put more than double the load on the system before reaching any bottleneck compared to typical SAN storage. Moreover, the IB interconnect used provides over 5 GB/s of per-link bandwidth, so only in the very largest systems will this become a true bottleneck.

CPU Bottleneck: The CPU is either busy doing work or waiting for I/O. Since I/O latency is in microseconds, the CPU spends far more time doing useful work with FlashMAX SSDs than when connected to a SAN.
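A rough headroom check makes the network-bottleneck claim concrete. The figures below (5 GB/s per IB link, 8K blocks, 100,000 write IOPS) are illustrative, combining numbers quoted in this paper:

```python
# Rough utilization of one Infiniband link under write load. Only
# the mirror copy of each write crosses the link (reads are 100%
# local), so even a heavy 8K write load uses a small fraction of
# the ~5 GB/s per-link bandwidth. Figures are illustrative.

LINK_BW = 5 * 1024**3          # ~5 GB/s per IB link, in bytes/sec
BLOCK = 8 * 1024               # 8K database block

def link_utilization(write_iops):
    # One mirror copy per write crosses IB; reads contribute nothing.
    return write_iops * BLOCK / LINK_BW

assert link_utilization(100_000) < 0.2   # 100K write IOPS uses well under 20% of one link
```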
The FlashMAX SSDs give effectively unlimited IOPS at microsecond latency, so the system is more likely to hit a CPU bottleneck before any I/O bottleneck. You can therefore put much more workload on the system and get better TCO for the investment.

Data Warehouse Performance

The data warehouse workload is characterized by high physical reads and high batch writes. To speed up reads, data warehouse queries typically run as parallel queries: the query slave processes run in parallel, each working on a subset of the data. The performance of the end query is highly dependent on I/O latency and the degree of parallelism. As all reads are 100% from the local SSDs, the I/O latency is very low. In addition, you will not observe latency spikes when running multiple concurrent parallel queries, due to the ability of PCIe SSDs to provide low-latency IOPS. An Oracle RAC parallel query can spawn slaves across servers, do the work on all servers, and return the processed data to the query coordinator. In our configuration, each parallel slave works on its local mirror copy and returns data to the query coordinator.

Batch write performance depends on write latency, write IOPS, and network latency. Having the database redo logs and data files on PCIe cards significantly improves latency due to the nature of SSDs: they have zero seek time, unlike HDDs, whose performance can vary depending on the load on the storage server. In a busy system the bottleneck is likely to shift from I/O to CPU and memory. The performance of any database is always better when data is processed close to where it resides; in this architecture, data resides in the same physical enclosure where the processing is done, which greatly improves performance over regular RAC clusters.
Scalability

[Figure: A four-node cluster running HR, Reports, and ERP workloads — servers 1 and 2 share Disk-Group 1 (FG1, FG2) over Virident Share, and servers 3 and 4 share Disk-Group 2 (FG1, FG2) the same way.]

Application scalability issues arise mainly from storage capacity, CPU, or memory constraints; the requirement to scale arises when you run out of any of these. Note that with HGST FlashMAX SSDs, I/O sizing is often not required. I/O sizing is usually one of the biggest issues a DBA confronts when scaling applications: when you add servers to accommodate more workload, you normally need to grow the I/O infrastructure as well, meaning more storage servers, more Fibre Channel controllers, controller caches, and so on.

Storage Scalability

As mentioned earlier, FlashMAX PCIe SSDs currently offer a capacity of up to 4.8 TB each, sufficient for most OLTP applications. If the servers have more PCIe slots, you can simply add additional PCIe cards as ASM disks to the same diskgroup. Each additional FlashMAX SSD adds up to 4.8 TB to the diskgroup capacity. Note that you will need to add equal-sized cards to the other failure group as well. Another option is tiered storage: keep active data on flash diskgroups and older, less frequently used data in diskgroups on a SAN. Oracle 12c has new features like Automatic Data Optimization (ADO) that can automate this process.

CPU/Memory Scalability

If CPU or memory is the bottleneck, you will need to add more RAC nodes. RAC's core value proposition is its ability to scale out on low-cost servers while maintaining excellent performance.
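The capacity arithmetic for growing the diskgroup is worth spelling out, since normal-redundancy mirroring halves the usable space. A minimal sketch, assuming the 4.8 TB per-SSD figure from this paper and two failure groups:

```python
# Capacity arithmetic for scaling the diskgroup: each FlashMAX SSD
# adds up to 4.8 TB raw, but normal-redundancy mirroring across the
# two failure groups keeps two copies of everything, halving the
# usable space. Equal-sized cards must be added to both groups.

SSD_TB = 4.8

def usable_tb(ssds_per_failure_group, failure_groups=2):
    raw = ssds_per_failure_group * failure_groups * SSD_TB
    return raw / 2   # normal redundancy stores two copies of all data

assert usable_tb(1) == 4.8    # one SSD per server
assert usable_tb(2) == 9.6    # one equal-sized SSD added to each failure group
```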
When you add nodes to consolidate different applications, you can configure each application to run on its own diskgroup and configure services so that connections are made only to servers that have a locally accessible failure group. If the additional nodes do not have the data locally, the application will still scale, but with some added latency for physical reads of data on a remote node. This may not be significant for OLTP systems, since the number of physical reads also depends on memory: with additional servers you are also increasing the total SGA size, so the physical-reads-per-second requirement will very likely be reduced.

Cost Savings

The cost savings of this solution are a very important part of what makes this architecture so compelling. The biggest savings arise from the fact that the CPU is busy doing work rather than waiting for I/O. Oracle RAC software is licensed per core, no matter how utilized that core may be. By freeing the CPU from waiting on I/O, more processing can be done with fewer cores. Alternatively, consolidating multiple databases onto a smaller set of servers can significantly reduce cost.

To run an Oracle RAC database with comparable I/O performance on shared storage, you would need multiple high-end storage servers. In addition to the cost of these storage servers, you would need additional system administrators to manage them. With FlashMAX SSDs attached to the servers, this role could easily be picked up by a DBA: there is only a one-time setup to configure the SSDs and minimal ongoing maintenance. The energy consumption of these systems is also significantly less than RAC on traditional storage servers with similar performance characteristics.
The rack space occupied by FlashMAX SSD storage is effectively zero, as all the storage resides in the server case itself. This is a big cost saving in hosting and data center environments where there is additional cost for rack space consumed by the servers. This solution can also easily be implemented by smaller companies that need high-end performance from small, energy-efficient systems with no special data center in house.

Conclusion

The ability to share PCIe SSDs across servers using HGST Virident Software makes it possible to combine the advantages of the Oracle RAC database with the advantages of PCIe SSDs, creating a very compelling architecture on which to run all databases.
Appendix

Just like database files, Oracle Cluster Registry (OCR) files are stored in an ASM disk group and therefore follow the ASM disk group configuration with respect to redundancy. For example, a normal-redundancy ASM disk group holds a two-way-mirrored OCR, so the failure of one disk in the disk group will not prevent access to the OCR. The Oracle Cluster Registry is the central repository for all resources registered with Oracle Clusterware. It contains the profile, state, and ownership details of the resources, including both Oracle resources and user-defined application resources. Oracle resources include the node apps (VIP, ONS, GSD, and Listener) and database resources such as database instances and database services. Oracle resources are added to the OCR by tools such as DBCA, NETCA, and srvctl. Oracle allows only one OCR per disk group in order to protect against physical disk failures. When configuring Oracle Clusterware files on a production system, you can configure either normal- or high-redundancy ASM disk groups. You can, however, have up to five OCR copies in separate diskgroups.

Voting files are not stored as standard ASM files. They follow the ASM disk group configuration with respect to redundancy, but are not managed as normal ASM files in the disk group; each voting disk is placed on a specific disk in the disk group. Voting files must reside in ASM in a single disk group. With normal redundancy you need three voting disks, and with high redundancy you need five. The files that can reside in ASM include voting disks, OCR files, SPFILEs for ASM and the database, database files, flashback files, redo log files, archive log files, RMAN backup files, export dumps, and password and wallet files. With ASM normal redundancy, Oracle needs three voting disks in an ASM diskgroup. The OCR file is stored like any other Oracle database file: it is striped and mirrored across all the disks in the diskgroup.
We need to point the voting disks to three ASM failure groups of a diskgroup. In addition, there is a RAC cluster rule that if you lose two of three voting disks, nodes get evicted from the cluster, or kick themselves out. In our two-node RAC configuration, the loss of a node also means losing the storage attached to it, so make sure you have only one failure group per node for each diskgroup. With a RAC cluster of three or more nodes this is not an issue: you can point to three ASM disks (HGST SSDs), one in each server, within the same diskgroup to store the voting disks. Even if one server goes down, the surviving nodes can still access two of three voting disks and will not be evicted. In a two-node RAC cluster, if you store two voting disks on one server and one on the other, then on failure of the server with two voting disks the surviving node can access only one of three voting disks and will be evicted. For such cases the quorum failure group concept was introduced. A quorum failure group is a special type of failure group: its disks do not contain user data and are not considered when determining redundancy requirements. The quorum failure group can be created from a standard NFS-based or iSCSI disk that the RAC cluster can access. That way, when one of the nodes goes down, the surviving node can access the quorum voting disk plus the voting disk on its own storage, for two of three voting disks, and will not be evicted.

One issue when configuring a quorum diskgroup is that the Oracle Universal Installer GUI, in both 11gR2 and 12c, offers no option to create a quorum disk during installation. The workaround is to first create an extra ASM disk (by partitioning the Virident Share disk) and install the RAC cluster and ASM diskgroups, and afterwards manually move the voting disk to an NFS or iSCSI destination.
The space required for the quorum voting disk is small, typically less than 300 MB.
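The eviction arithmetic above reduces to a strict-majority check. As a minimal, purely illustrative sketch (Clusterware performs this logic internally):

```shell
# A node stays in the cluster only while it can access a strict majority
# of the voting disks: accessible > total / 2, in integer arithmetic.
can_survive() {
  local accessible=$1 total=$2
  [ $(( accessible * 2 )) -gt "$total" ]
}

# Two-node cluster, three voting disks split 2 + 1 across the servers:
# the node left with only its single local vote loses quorum...
can_survive 1 3 || echo "1 of 3 votes: node is evicted"
# ...but with a third vote on an NFS/iSCSI quorum failure group, the
# survivor sees 2 of 3 votes and stays in the cluster.
can_survive 2 3 && echo "2 of 3 votes: node survives"
```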
15 References Contact Information

Estuate
1183 Bordeaux Drive, Sunnyvale, CA

HGST, a Western Digital company
3403 Yerba Buena Road, San Jose, CA

HGST, Inc., 3403 Yerba Buena Road, San Jose, CA USA. Produced in the United States 6/14. All rights reserved. ServerCache is a trademark of HGST, Inc. and its affiliates in the United States and/or other countries. HGST trademarks are intended and authorized for use only in countries and jurisdictions in which HGST has obtained the rights to use, market and advertise the brand. Contact HGST for additional information. HGST shall not be liable to third parties for unauthorized use of this document or unauthorized use of its trademarks. References in this publication to HGST's products, programs, or services do not imply that HGST intends to make these available in all countries in which it operates. The information provided does not constitute a warranty. Information is true as of the date of publication and is subject to change. Actual specifications for unique part numbers may vary. Please visit the Support section of our website for additional information on product specifications. Photographs may show design models.

WP020-EN-US