HGST Flash Pools Implementation Guide for MySQL Database
IMPLEMENTATION GUIDE, DECEMBER 2014
Table of Contents

Introduction
HGST Software
HGST Space (logical volume) and HA Benefits
MySQL Challenges
HGST Flash Pool Solution for MySQL
Evolution from Master/Slave to Flash Pool
Database Snapshot for Backup, ETL and Other Off-host Processing
Implementation
System Configuration
Hardware and Software Requirements
Network Configuration
Summary of Steps to Implement HGST Flash Pools for MySQL Database
Disable SELinux and iptables
Install MySQL
Install and Enable CLVM on All Servers
Load the Drivers on All Servers in the Cluster
Cluster Configuration
Space Creation
Networking Selection
Creating the CLVM Space and the Mirror Space
Creating LVM Volumes for Each Master
MySQL Configuration and Database Creation
HGST MySQL Monitoring
Testing Failover of a Master to the Redundant Server
Database Snapshots for Off-host Processing
Cleanup
Performance Metrics
HDD Master/Slave Replication vs. FlashMAX Master/Slave Replication
HGST Flash Pools Replication Compared to Master/Slave SSD Replication
Summary
Introduction

MySQL is one of the most popular and widely used open source databases. MySQL instances power many of the web's most demanding applications. Its benefits include ease of use, security, low cost, speed, memory management, ACID compliance, scalability and multi-OS support.

HGST Software

HGST Software is based on FlashMAX, a PCIe SSD which provides a scalable infrastructure with very high performance while running on commodity servers. The software has a volume management feature called HGST Space, which provides an aggregated block device from a pool of servers, with a volume-level mirroring feature providing transparent, synchronous replication of data to deliver High Availability. With Space you can cluster multiple FlashMAX devices and establish full mirroring between them. With 16 nodes, fully mirrored, using 4.8TB FlashMAX Capacity SSDs, you can have 38.4TB of Flash as a single, highly available pool. You can then use the new Graphical User Interface to carve that pool into volumes of any size and serve them up to applications as needed. All hosts in the cluster can see all volumes. Adding servers or devices is a snap, and volumes can grow or shrink dynamically.

HGST Space (logical volume) and HA Benefits

- Highest performance for data-intensive workloads
- Pools Flash across the entire cluster, breaking free from single-server limitations
- Distributes IOs within the aggregated devices for maximum performance
- A Space can be extended to scale to many TBs
- Automatic pausing of IOs during failures
- Transparent replication and high availability via synchronous mirroring

MySQL Challenges

Although MySQL is widely used by many web and enterprise applications, it has key issues in terms of scalability, performance and high availability. To overcome these limitations, users try to split reads from writes by using a Master/Slave infrastructure and partitioning the data by means of sharding. Most users deploy these pairs to provide resiliency: should the Master go down, a slave can be promoted and begin serving queries in seconds, with only minimal data loss. Other architectures have more than one slave and use them to offload read queries from the web tier. These read-only copies are needed because traditional spinning-disk MySQL performance just can't keep up with application needs. Another common use for MySQL slaves is ETL or backup. Dumping the database can cause disk and cache thrashing, so it's important not to run it on a Master that has critical SLAs to meet; a slave is often used for this purpose instead. Finally, a slave can be pulled from a cluster and used to test schema changes, migrate to the development cluster, or serve other development uses. This greatly simplifies the developers' and DBAs' tasks, since if things go wrong the slave can easily be resynchronized from the Master. All of this, however, leads to an enormous hardware infrastructure and complex database management issues.
The main issues are:

- Poor utilization of hardware resources like CPU and storage
- Master/Slave lag, since replication is asynchronous
- Multiple shards, each with its own set of slaves, introduce manageability issues
- CAPEX and OPEX increase as the number of servers increases

Figure 1: HGST Space Cluster & Volume Manager
Figure 2: Traditional Master/Slave Architecture (MySQL sharding and replication: writes go to each shard's Master, which replicates to its slaves; reads are served from caches and slaves)
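The traditional architecture in Figure 2 is built on MySQL's native asynchronous replication. As a point of reference for the lag issue above, the slave side of such a pair is typically attached with statements like the following (hostname, account, password and binlog coordinates are illustrative, not from this guide's configuration):

# On each slave, point replication at the shard's Master (all values illustrative)
mysql -u root -p'root_pass' <<'EOF'
CHANGE MASTER TO
  MASTER_HOST='shard1-master',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_pass',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS\G
EOF
# Seconds_Behind_Master in the SHOW SLAVE STATUS output is the asynchronous lag discussed above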
HGST Flash Pool Solution for MySQL

HGST believes that scale-out database architectures are ideal for SSDs. For this type of environment, FlashMAX PCIe-based SSDs have the greatest benefits. FlashMAX can deliver more performance per server than other storage technologies, resulting in server consolidations that range from 2:1 to 40:1 depending upon the type of disk being replaced. HGST customers have even seen 3:1 reductions in servers that were using lower-end SAS- and SATA-based SSDs, resulting in power and footprint expense drops of 3x while giving IOPS increases of 400%.

Figure 3: HGST Flash Pools Redefines the MySQL Master/Slave Approach (before and after, with a multi-function server)

HGST Space with FlashMAX storage provides a unique solution for server consolidation in MySQL or other scale-out databases that use sharding. Using Space, replication can be set from multiple Masters to multiple virtual slaves using unique Space volumes. A separate Redundant Server can be added to the cluster. When it is needed, the Redundant Server can mount any of the replicated volumes from the Space pool and be used to ensure continued application availability. In the example shown above, 8 servers are reduced to 5, a reduction of 38% and a significant TCO/ROI benefit. The main benefits of this solution are:

- Improved cluster performance capabilities
- Reduced management overhead
- Enhanced server utilization and improved TCO
- Improved performance, even with fewer servers

Evolution from Master/Slave to Flash Pool

So let's actually step through how we get to a Flash Pool architecture from a typical Master/Slave configuration. Let's begin with a sharded database comprised of a series of Masters backed by a group of read Slaves. They are all running on HDD arrays, and replication is handled by standard MySQL replication, on a lagging transaction basis. All those read slaves are consuming too much space and power, so we replace them with PCIe Flash and reduce the number of shards needed accordingly. But we still need a Slave for each Master, mostly for high availability. Because the Master and Slave need to be identically configured, they both have PCIe Flash installed. Replication still lags, being transaction based.

Figure 4: HGST Flash Pools for Server Consolidation (before: dedicated replication pairs and inefficient server utilization; after: a shared, clustered multi-function server, a fully mirrored pool of Flash, any server to any volume, and 8 servers consolidated to 5, a 38% reduction)
Finally, we consolidate even more by introducing a Flash Pool. In this configuration the same number of Masters are present, and they still have Flash, but the slaves are removed and consolidated into a single server. This is made possible because all Flash in the system is now made redundant and visible to all members of the cluster.

Figure 5: Synchronous Mirroring for Server Consolidation

It's conceptually the same as if the Flash inside the servers was made into a redundant SAN array, transparently, while preserving the access times and performance of local PCIe Flash. The redundant server, instead of running MySQL in normal operation, runs only a monitoring script. There is no need to set up Master/Slave pairs, as each MySQL server's data is replicated across the cluster at the disk-block level. On a failure, the redundant server can connect itself automatically (via a scripting interface similar to mysqlfailover) to the data files of the failed Master and take over without losing a single transaction in the process.

Figure 6: Snapshots for Recovery & Multi-Use with the HGST Flash Pool solution

Database Snapshot for Backup, ETL and Other Off-host Processing

Database-level snapshots of the CLVM mirrors can be created using custom scripts provided by HGST. The snapshots are available on the redundant server to perform off-host processing, or on the same Master if needed.
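Before moving on to implementation, it may help to see the takeover mechanism concretely. The takeover described above is performed by the vgc_mysql_failover script supplied by HGST and installed in the following sections; purely as an illustration, a minimal sketch of such a detect-mount-start loop might look like the code below. The hostnames, volume and mount point are this guide's examples, the per-shard config filename is hypothetical, and the real script's internals may differ.

#!/bin/bash
# Hypothetical sketch of the redundant server's failover loop; real
# deployments should use the vgc_mysql_failover script from HGST.
MASTERS="tm18 tm19 tm20"
while true; do
    for h in $MASTERS; do
        if ! ping -c 1 -W 2 "$h" >/dev/null 2>&1; then
            # Master unreachable: mount its mirrored volume and start MySQL here
            mount -t ext4 /dev/mysql_vg1/shard1_vol /data    # volume of the failed Master
            mysqld --defaults-file=/etc/my_shard1.cnf --user=root &   # per-shard my.cnf copied from the Master
            exit 0
        fi
    done
    sleep 10    # matches the Interval setting in mysql_failover.conf
done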
Implementation

Installation and Configuration

Installation of the cluster solution is very similar to installation of the base FlashMAX drivers. It requires root access, of course, and about 10 minutes to complete. The following is the configuration used to implement this solution.

System Configuration

- Four servers: Supermicro X9DRT
- CPU: Intel Xeon E5
- Memory: 64GB
- Hard disk: 1TB
- OS: CentOS
- Gigabit Ethernet network (10Gb suggested for better performance)

Hardware and Software Requirements

- One HGST FlashMAX device per server, with a minimum of three such servers
- A fourth server without a FlashMAX device

Software Requirements

For software requirements and compatible versions, refer to the HGST Solutions 2.0 Release Notes and the HGST Solutions 2.0 Product Brief.

Network Configuration

HGST Space software requires either an InfiniBand or Ethernet network connection before the software is configured. Please refer to the HGST Solutions 2.0 User Guide for more details on configuring networking.

Summary of Steps to Implement HGST Flash Pools for MySQL Database

1. Disable SELinux and the firewalls (iptables and ip6tables), or configure them appropriately.
2. Install and enable the Cluster LVM (CLVM) daemon on all nodes.
3. Install the MySQL binaries on all nodes.
4. Install HGST Space on all four servers.
5. Install the Flash Pool scripts on the fourth server, which will act as the backup node for the other three nodes in the cluster.
6. Configure Space on all nodes, and create a Space volume with HA to run a MySQL instance on each of the three nodes.
7. Start the vgc_mysql_failover script on the fourth node.
8. Configure and create Linux CLVM volumes for the three nodes.
9. Create the MySQL databases on the nodes other than the backup node.
10. Run the YCSB test tool (see the sketch after this list).
11. Run vgc_mysql_prepare and vgc_mysql_snap to create and then break the mirror.
12. Run vgc_mysql_clone on the fourth node and check that the cloned MySQL instance is up and running; then stop that instance on the backup node.
13. Reboot one of the servers running a MySQL instance; the instance should fail over to the backup node. Check that the backup node is running the Master instance.
14. Collect performance numbers.
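Step 10 exercises the databases with YCSB. The guide does not spell out the invocation; with YCSB's JDBC binding, a load-and-run pass against one of the Masters could look roughly like the following. The database name, workload choice and credentials are illustrative, and the JDBC binding expects its usertable schema to exist in the target database before loading.

# Load, then run, YCSB workload A against the shard1 Master (values illustrative)
bin/ycsb load jdbc -P workloads/workloada \
    -p db.driver=com.mysql.jdbc.Driver \
    -p db.url=jdbc:mysql://tm18:3306/ycsb \
    -p db.user=root -p db.passwd=test
bin/ycsb run jdbc -P workloads/workloada \
    -p db.driver=com.mysql.jdbc.Driver \
    -p db.url=jdbc:mysql://tm18:3306/ycsb \
    -p db.user=root -p db.passwd=test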
Disable SELinux and iptables

To disable SELinux, edit the SELinux config file and set SELINUX=disabled on all the servers. If desired, it is also possible to configure SELinux to allow MySQL access to the appropriate files, but that is not covered in this document.

root@(all servers)# vim /etc/selinux/config
...
SELINUX=disabled
root@(all servers)# echo 0 > /selinux/enforce

Space software requires the ability to connect between servers via TCP/IP. To ensure this communication can occur without impediment, we recommend completely disabling the firewalls by stopping iptables and ensuring it doesn't start up at boot time, as shown below. Again, if firewalls are required they may be left enabled as long as the appropriate ports are opened for access (a sketch of that alternative follows at the end of this section). See the Solutions 2.0 User's Guide for more information.

root@(all servers)# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
root@(all servers)# chkconfig iptables off
root@(all servers)# service ip6tables stop
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Unloading modules: [ OK ]
root@(all servers)# chkconfig ip6tables off

Install MySQL

MySQL can be downloaded and installed from Oracle, Percona, or MariaDB. Make sure to install the same version on all nodes in the cluster, including the redundant server.

Install and Enable CLVM on All Servers

root@(all servers)# yum install -y lvm2-cluster cmirror
root@(all servers)# service clvmd start
Starting clvmd:
Activating VG(s): No volume groups found [ OK ]
root@(all servers)# service cmirrord start
Starting cmirrord: [ OK ]
root@(all servers)# chkconfig clvmd on
root@(all servers)# chkconfig cmirrord on

Load the Drivers on All Servers in the Cluster

Run `service vgcd start` to load the drivers. The drivers are set by default to load at system boot time, so you could also simply reboot the server.

root@(all servers)# service vgcd start
Loading kernel modules... [ OK ]
Rescanning SW RAID volumes... [ OK ]
Rescanning LVM volumes... [ OK ]
Enabling swap devices... [ OK ]
Rescanning mount points... [ OK ]
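If disabling the firewall is not acceptable, the usual CentOS 6 alternative is to open the required ports instead of stopping iptables. A minimal sketch follows; the port below is a placeholder, not HGST's actual port list, which is documented in the Solutions 2.0 User's Guide.

# Open one required port (placeholder) and persist the rule across reboots
root@(all servers)# iptables -I INPUT -p tcp --dport <space-port> -j ACCEPT
root@(all servers)# service iptables save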
On all the servers, run vgc-monitor to check the status of the card.

root@(all servers)# vgc-monitor
vgc-monitor: Cluster Solutions V6
Driver Uptime: 6 days 6:18
Card Name  Num Partitions  Card Type         Status
vgca       1               VIR-M2-LP-550-1B  Good
Partition  Usable Capacity  RAID     FMC
vgca0      555 GB           enabled  disabled

Enable the HGST Space feature (FMC) by running the following vgc-config command on all servers. This requires a low-level format of the cards and destroys any data present on them.

root@(all servers)# vgc-config -p /dev/vgca0 -m maxperformance --enable-fmc
vgc-config: Cluster Solutions V6
*** WARNING: this operation will erase ALL data on this drive, type <yes> to continue: yes
*** Formatting partition. Please wait...
*** Run vgc-monitor to make sure FMC feature is enabled:

root@(all servers)# vgc-monitor
vgc-monitor: Cluster Solutions V6
Driver Uptime: 6 days 7:24
Card Name  Num Partitions  Card Type         Status
vgca       1               VIR-M2-LP-550-1B  Good
Partition  Usable Capacity  RAID     FMC
vgca0      461 GB           enabled  enabled
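Every FlashMAX node must show FMC enabled before the cluster is configured. Assuming passwordless SSH between the nodes, one quick way to check all three storage nodes in a single pass (hostnames are the ones used throughout this guide):

# Verify FMC status on every FlashMAX node (assumes passwordless SSH)
for h in tm18 tm19 tm20; do
    echo "== $h =="
    ssh root@$h vgc-monitor
done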
Cluster Configuration

The vgcclustermgr service should be running on at least N/2 of the N servers in the cluster.

[root@tm17 ~]# vgc-cluster domain-add-node -h tm18
DOMAIN_ADD_NODE Request Succeeded
[root@tm17 ~]# vgc-cluster domain-add-node -h tm19
DOMAIN_ADD_NODE Request Succeeded
[root@tm17 ~]# vgc-cluster domain-add-node -h tm20
DOMAIN_ADD_NODE Request Succeeded
[root@tm17 ~]# vgc-cluster domain-list
Host  State   Role
tm17  Online  Manager (Active)  <== will act as the redundant node for this guide
tm18  Online
tm19  Online
tm20  Online

Space Creation

The Space is the shared LUN that will contain the CLVM volumes on which MySQL executes. We need to create one CLVM Space large enough to contain the CLVM volumes for all of the MySQL instances, and another Mirror Space large enough to contain a snapshot/mirror of the largest MySQL volume.

Networking Selection

Spaces communicate across networks using InfiniBand RDMA or IP over Ethernet. List all the available networks and determine which one is your 10Gb link:

[root@tm17 ~]# vgc-cluster network-list
Network Name  Type  Flags       Description
Net 1 (IPv4)  IPv4  autoconfig  /22
Net 2 (IPv6)  IPv6  autoconfig  fe80::/64
Net 3 (IPv4)  IPv4  autoconfig  /24

Creating the CLVM Space and the Mirror Space

Create one CLVM Space large enough to contain the CLVM volumes for all of the MySQL instances. For example, if each MySQL database is 100GB and 3 Masters are used, create a 300GB CLVM Space. Specify each server as a -S storage host and an -A application host. Specify the -N network using the name shown above (be sure to use appropriate quotation marks around the network name). Also, ensure that the --redundancy value is 1 to guarantee availability of the data in case of server loss.

[root@tm17 ~]# vgc-cluster vspace-create -n shard1 -N "Net 3 (IPv4)" -S tm18 -S tm19 -S tm20 -A tm18 -A tm19 -A tm20 -s 300G --redundancy 1
Create another Mirror Space large enough to contain a snapshot/mirror of the largest MySQL volume. Using the same example sizes as above, this should be a 100GB Space, as that is the size of an individual MySQL instance. Use the same name as the prior Space and add a postfix of _m1 to identify it as the Mirror Space.

[root@tm17 ~]# vgc-cluster vspace-create -n shard1_m1 -N "Net 3 (IPv4)" -S tm18 -S tm19 -S tm20 -A tm18 -A tm19 -A tm20 -s 100G --redundancy 1

Creating LVM Volumes for Each Master

Create an LVM volume on the Space volume, one for each Master.

[root@tm18 ~]# pvcreate /dev/shard1 /dev/shard1_m1
Physical volume /dev/shard1 successfully created
Physical volume /dev/shard1_m1 successfully created
[root@tm18 ~]# vgcreate mysql_vg1 /dev/shard1 /dev/shard1_m1
Volume group mysql_vg1 successfully created
[root@tm18 ~]# lvcreate -L 100G -n shard1_vol mysql_vg1
Logical volume shard1_vol created
[root@tm18 ~]# lvcreate -L 100G -n shard2_vol mysql_vg1
Logical volume shard2_vol created
[root@tm18 ~]# lvcreate -L 100G -n shard3_vol mysql_vg1
Logical volume shard3_vol created

Create and Mount File Systems for Each Master

Once the LVM volumes are created, they can be used as if they were local storage in each of the servers. The usual file system creation, fstab entries, etc. should be set up on each Master in the cluster. Create a file system using any non-cluster file system (XFS, EXT4, etc.). Even though this is a clustered redundancy architecture, each server accesses its own private LUN, so no special clustered file system is required.

[root@tm18 ~]# mkfs -t ext4 /dev/mysql_vg1/shard1_vol
mke2fs (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
... inodes, ... blocks
2457 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks= ...
... block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks: 8193, 24577, ...
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 33 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
Mount the filesystem and give proper permissions to the mount point.

root@(each Master)# mount -t ext4 /dev/mysql_vg1/shard1_vol /data
root@(each Master)# chmod -R 755 /data

Edit /etc/fstab to enable auto-mounting of the LVM volume for each Master instance.

/dev/mysql_vg1/shard1_vol /data ext4 defaults 0 0

MySQL Configuration and Database Creation

Edit /etc/my.cnf to point the MySQL data directory at the mounted file system on each Master.

[mysqld]
datadir=/data/mysql

Create the MySQL database using the mysql_install_db script (or any custom script), start the MySQL daemon, and use the mysqladmin CLI to set a password for the database. Also copy /etc/my.cnf to the redundant server with the shard name embedded in the filename (the destination shown below, tm17:/etc/my_shard1.cnf, is an example of that convention).

root@(each Master)# mysql_install_db --defaults-file=/etc/my.cnf
root@(each Master)# mysqld --defaults-file=/etc/my.cnf --user=root &
root@(each Master)# mysqladmin -u root password test
root@(each Master)# scp /etc/my.cnf tm17:/etc/my_shard1.cnf

HGST MySQL Monitoring

Copy the vgc_mysql_failover script to the /usr/bin directory on the Redundant Server, and disable MySQL auto-startup there, since MySQL on this node should only ever be started by the failover tool:

root@(redundant server)# chkconfig mysqld off

Then start the HGST MySQL monitoring tool for failover on the Redundant Server.
The MySQL monitoring script is configured via /etc/vgc/mysql_failover.conf:

# List of servers running the shards
Master=tm18,tm19,tm20
# Interval (seconds) at which to check that each Master is alive
Interval=10
# Pre-fail script to be executed: a script to make all the
# clients refer to the new host where the Master will be running
Pre-fail=
# Post-fail script to be executed
Post-fail=
# LVM volume name of the Master
# Mount point of the Master
Mount_point=
# Startup mode: daemon, background, none
Startup=daemon
# Log type: debug, info
Log=debug

Testing Failover of a Master to the Redundant Server

At this point the cluster should be operational and all databases running. We'll test failover by powering a server down and verifying that the Redundant Server starts the MySQL service automatically. Shut down the server running the shard1 database to test failover:

[root@tm18 ~]# shutdown -h 0
Broadcast message from root@tm18 (/dev/pts/0) at 23:17...
The system is going down for halt NOW!
The backup server running the monitoring script will detect the failure and start the shard1 instance. The monitoring script was enabled at the beginning:

[root@tm17 ~]# python vgc_mysql_failover
NOTICE: python version check complete
Master tm18 shut down
NOTICE: Block device is visible
NOTICE: Volume is visible
NOTICE: Volume shard1_vol mounted on /data
NOTICE: Fail-over of correct Master tm18
root ... mysqld --datadir=/data/mysql
NOTICE: Fail-over completed.

Database Snapshots for Off-host Processing

Snapshots of the Master LVM volumes can be taken using a special sequence of commands. Do not attempt to use the standard LVM snapshot capability, as it does not function and returns errors when running under CLVM. Prepare the database volume to take a CLVM snapshot:

[root@tm18 ~]# bash vgc_mysql_prepare.sh /data
NOTICE: Volume is not Mirrored
Do you want to mirror the volume [y/n]? y
volume mysql_shard1 mirroring completed successfully

Then generate the snapshot. Because this snapshot is actually a full LVM mirror underneath, it takes longer than the lightweight LVM snapshots you may be used to; expect a runtime of around 30 seconds per GB. The snapshot should be initiated from the Master being snapshotted. Once completed, break the snapshot before mounting it on the same or another server. Again, this operation needs to run on the Master being snapshotted.

[root@tm18 ~]# bash vgc_mysql_snap.sh /data
Logical volume mysql_shard1 converted.
Mirror break of volume mysql_shard1 completed successfully
Start the database on the redundant server from the snapshot volume, using the helper scripts provided:

[root@tm17 ~]# bash vgc_mysql_clonedb.sh /etc/my.cnf
NOTICE: MySQL is not running!
NOTICE: MySQL clone instance started
[root@tm17 ~]# ps -ef | grep mysqld
mysql ... mysqld --datadir=/data/mysql
root ... grep mysqld

Cleanup

After using the snapshot for any off-host processing activity, perform the following cleanup steps:

a. Stop the MySQL clone instance running on the snapshot, using mysqladmin or the kill command.
b. Unmount the filesystem.
c. Remove the LVM volume:

[root@tm18 ~]# lvremove mysql_vg1/shard1_mirror
Do you really want to remove active logical volume shard1_mirror? [y/n]: y
Logical volume shard1_mirror successfully removed

After this operation, use the prepare command to re-create the mirror for any MySQL Master database as required.

Performance Metrics

We performed a series of MySQL benchmarks on the test configuration shown to demonstrate the performance of HGST FlashMAX and the superior speed of HGST Flash Pools versus traditional replication.

HDD Master/Slave Replication vs. FlashMAX Master/Slave Replication

HGST Flash Pools deliver dramatically superior performance compared to an HDD-based array. This first test compares MySQL standard Master/Slave replication under a mysqlslap workload. The servers used in this example delivered over 40x as many transactions as an HDD array, demonstrating the speed and consolidation possibilities of moving from spinning media to FlashMAX.

Figure 7: HDDs vs. SSDs for Replication (transactions/second, normalized: 1x for MySQL Master/Slave on HDD vs. 40.5x for HGST Flash Pool Master/Slave)
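Both comparisons drive MySQL with mysqlslap, the load-emulation client bundled with MySQL. The guide does not record the exact invocation used; a representative run against one of the Masters might look like the following (the concurrency, iteration and query counts are illustrative, not the guide's settings):

# Representative mysqlslap load against the shard1 Master (values illustrative)
mysqlslap -h tm18 -u root -ptest \
    --concurrency=64 --iterations=10 \
    --auto-generate-sql --auto-generate-sql-load-type=mixed \
    --number-of-queries=100000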
HGST Flash Pools Replication Compared to Master/Slave SSD Replication

For this test we took identical server pairs running fully on HGST FlashMAX and compared standard MySQL Master/Slave replication to Flash Pool replication under the same mysqlslap workload. HGST Flash Pools achieved over 60% higher performance, with only about half the number of servers. In addition, since the replication is synchronous and at the block level, we can ensure much more granular recovery in the event of a Master failure.

Figure 8: Generic PCIe SSDs vs. HGST FlashMAX SSDs for Replication (transactions/second, normalized: 1x for MySQL Flash Master/Slave replication vs. 1.6x for HGST Flash Pool replication)

Summary

HGST Flash Pools for MySQL removes the administrative complexity of managing a Master/Slave cluster architecture, replacing it with a simpler and more scalable architecture that makes MySQL a low-latency, high-performance database. Putting PCIe Flash to work in your MySQL environment will yield immediate performance benefits, but there's more. Since Flash is so much faster and lower latency, you can consolidate servers by up to 38%, shifting CAPEX and OPEX expenses to profitable business initiatives. With HGST Flash Pools, you can eliminate the barriers that have persisted in conventional MySQL Master/Slave architecture by using a multi-function server. As a result you'll enhance data availability and achieve a significant reduction in capital expense for hardware, software and maintenance. The impact on data center energy costs will be enormous, setting the stage for future growth initiatives. Finally, because HGST Flash Pools use synchronous mirroring, you guarantee the transactional integrity of replication, resulting in a High Availability MySQL cluster with no replication lag.

HGST, Inc., 3403 Yerba Buena Road, San Jose, CA USA. Produced in the United States 11/14, revised 8/15. All rights reserved. FlashMAX is a registered trademark of HGST, Inc. and its affiliates in the United States and/or other countries. HGST trademarks are intended and authorized for use only in countries and jurisdictions in which HGST has obtained the rights to use, market and advertise the brand. Contact HGST for additional information. HGST shall not be liable to third parties for unauthorized use of this document or unauthorized use of its trademarks. All other trademarks are the property of their respective owners. References in this publication to HGST's products, programs, or services do not imply that HGST intends to make these available in all countries in which it operates. This document is presented for information purposes only and does not constitute a warranty. Actual results may vary depending on a number of factors. The user is responsible for ensuring the suitability of any proposed solution for its particular purpose. IG01-EN-US